Aug 5 22:12:16.135416 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024
Aug 5 22:12:16.135531 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:12:16.135548 kernel: BIOS-provided physical RAM map:
Aug 5 22:12:16.135561 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 5 22:12:16.135572 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 5 22:12:16.135584 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 5 22:12:16.135608 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Aug 5 22:12:16.135620 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Aug 5 22:12:16.135633 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Aug 5 22:12:16.135645 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 5 22:12:16.135657 kernel: NX (Execute Disable) protection: active
Aug 5 22:12:16.135669 kernel: APIC: Static calls initialized
Aug 5 22:12:16.135680 kernel: SMBIOS 2.7 present.
Aug 5 22:12:16.135691 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Aug 5 22:12:16.135710 kernel: Hypervisor detected: KVM
Aug 5 22:12:16.135761 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 5 22:12:16.135938 kernel: kvm-clock: using sched offset of 6498103762 cycles
Aug 5 22:12:16.135970 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 5 22:12:16.136401 kernel: tsc: Detected 2499.996 MHz processor
Aug 5 22:12:16.136417 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 5 22:12:16.136433 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 5 22:12:16.136450 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Aug 5 22:12:16.136462 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 5 22:12:16.136474 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 5 22:12:16.136486 kernel: Using GB pages for direct mapping
Aug 5 22:12:16.136497 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:12:16.136509 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Aug 5 22:12:16.136521 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Aug 5 22:12:16.136533 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 5 22:12:16.136544 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Aug 5 22:12:16.136560 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Aug 5 22:12:16.136571 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:12:16.136583 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 5 22:12:16.136594 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Aug 5 22:12:16.136606 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 5 22:12:16.136626 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Aug 5 22:12:16.136638 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Aug 5 22:12:16.136649 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:12:16.136664 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Aug 5 22:12:16.136676 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Aug 5 22:12:16.136693 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Aug 5 22:12:16.136706 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Aug 5 22:12:16.136718 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Aug 5 22:12:16.136730 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Aug 5 22:12:16.136746 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Aug 5 22:12:16.136758 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Aug 5 22:12:16.136770 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Aug 5 22:12:16.136789 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Aug 5 22:12:16.136802 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 5 22:12:16.136814 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 5 22:12:16.136826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Aug 5 22:12:16.136839 kernel: NUMA: Initialized distance table, cnt=1
Aug 5 22:12:16.136851 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Aug 5 22:12:16.136867 kernel: Zone ranges:
Aug 5 22:12:16.136879 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 5 22:12:16.136952 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Aug 5 22:12:16.137000 kernel: Normal empty
Aug 5 22:12:16.137019 kernel: Movable zone start for each node
Aug 5 22:12:16.137032 kernel: Early memory node ranges
Aug 5 22:12:16.137045 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 5 22:12:16.137057 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Aug 5 22:12:16.137118 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Aug 5 22:12:16.137195 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:12:16.137209 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 5 22:12:16.137222 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Aug 5 22:12:16.137235 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 5 22:12:16.137247 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 5 22:12:16.137259 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Aug 5 22:12:16.137272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 5 22:12:16.137291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 5 22:12:16.137304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 5 22:12:16.137320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 5 22:12:16.137332 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 5 22:12:16.137345 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 5 22:12:16.137358 kernel: TSC deadline timer available
Aug 5 22:12:16.137370 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 5 22:12:16.137382 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 5 22:12:16.137395 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Aug 5 22:12:16.137407 kernel: Booting paravirtualized kernel on KVM
Aug 5 22:12:16.137419 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 5 22:12:16.137436 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 5 22:12:16.137448 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Aug 5 22:12:16.137466 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Aug 5 22:12:16.137479 kernel: pcpu-alloc: [0] 0 1
Aug 5 22:12:16.137491 kernel: kvm-guest: PV spinlocks enabled
Aug 5 22:12:16.137504 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 5 22:12:16.137518 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:12:16.137532 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:12:16.137548 kernel: random: crng init done
Aug 5 22:12:16.137560 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:12:16.137573 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 5 22:12:16.137585 kernel: Fallback order for Node 0: 0
Aug 5 22:12:16.137598 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Aug 5 22:12:16.137609 kernel: Policy zone: DMA32
Aug 5 22:12:16.137627 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:12:16.137640 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved)
Aug 5 22:12:16.137652 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 5 22:12:16.137667 kernel: Kernel/User page tables isolation: enabled
Aug 5 22:12:16.137680 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 5 22:12:16.137692 kernel: ftrace: allocated 148 pages with 3 groups
Aug 5 22:12:16.137705 kernel: Dynamic Preempt: voluntary
Aug 5 22:12:16.137717 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:12:16.137731 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:12:16.137743 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 5 22:12:16.138077 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:12:16.138101 kernel: Rude variant of Tasks RCU enabled.
Aug 5 22:12:16.138121 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:12:16.138136 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:12:16.138152 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 5 22:12:16.138167 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 5 22:12:16.138182 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:12:16.138197 kernel: Console: colour VGA+ 80x25
Aug 5 22:12:16.138212 kernel: printk: console [ttyS0] enabled
Aug 5 22:12:16.138227 kernel: ACPI: Core revision 20230628
Aug 5 22:12:16.138242 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Aug 5 22:12:16.138257 kernel: APIC: Switch to symmetric I/O mode setup
Aug 5 22:12:16.138276 kernel: x2apic enabled
Aug 5 22:12:16.138291 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 5 22:12:16.138317 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 5 22:12:16.138336 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Aug 5 22:12:16.138352 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 5 22:12:16.138368 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Aug 5 22:12:16.138384 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 5 22:12:16.138399 kernel: Spectre V2 : Mitigation: Retpolines
Aug 5 22:12:16.138415 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 5 22:12:16.138431 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 5 22:12:16.138447 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 5 22:12:16.138463 kernel: RETBleed: Vulnerable
Aug 5 22:12:16.138482 kernel: Speculative Store Bypass: Vulnerable
Aug 5 22:12:16.138497 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:12:16.138513 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:12:16.138529 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 5 22:12:16.138545 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 5 22:12:16.138560 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 5 22:12:16.138579 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 5 22:12:16.138595 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 5 22:12:16.138611 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 5 22:12:16.138627 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 5 22:12:16.138643 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 5 22:12:16.138659 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 5 22:12:16.138674 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 5 22:12:16.138754 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 5 22:12:16.138772 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 5 22:12:16.138788 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 5 22:12:16.138802 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Aug 5 22:12:16.138822 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Aug 5 22:12:16.138838 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Aug 5 22:12:16.138853 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Aug 5 22:12:16.138870 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Aug 5 22:12:16.138886 kernel: Freeing SMP alternatives memory: 32K
Aug 5 22:12:16.138901 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:12:16.138917 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:12:16.138933 kernel: SELinux: Initializing.
Aug 5 22:12:16.138949 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:12:16.138994 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:12:16.139011 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 5 22:12:16.139027 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:12:16.139047 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:12:16.139063 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:12:16.139079 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 5 22:12:16.139095 kernel: signal: max sigframe size: 3632
Aug 5 22:12:16.139468 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:12:16.139501 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:12:16.139518 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 5 22:12:16.139535 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:12:16.139551 kernel: smpboot: x86: Booting SMP configuration:
Aug 5 22:12:16.139572 kernel: .... node #0, CPUs: #1
Aug 5 22:12:16.139596 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 5 22:12:16.139614 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 5 22:12:16.139630 kernel: smp: Brought up 1 node, 2 CPUs
Aug 5 22:12:16.139646 kernel: smpboot: Max logical packages: 1
Aug 5 22:12:16.139662 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Aug 5 22:12:16.139756 kernel: devtmpfs: initialized
Aug 5 22:12:16.139778 kernel: x86/mm: Memory block size: 128MB
Aug 5 22:12:16.139883 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:12:16.139901 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 5 22:12:16.139918 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:12:16.139934 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:12:16.139950 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:12:16.139990 kernel: audit: type=2000 audit(1722895935.753:1): state=initialized audit_enabled=0 res=1
Aug 5 22:12:16.140006 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:12:16.140022 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 5 22:12:16.140039 kernel: cpuidle: using governor menu
Aug 5 22:12:16.140060 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:12:16.140114 kernel: dca service started, version 1.12.1
Aug 5 22:12:16.140136 kernel: PCI: Using configuration type 1 for base access
Aug 5 22:12:16.140154 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 5 22:12:16.140171 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:12:16.140425 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:12:16.140447 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:12:16.140463 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:12:16.140480 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:12:16.140500 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:12:16.140516 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:12:16.140532 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:12:16.140548 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 5 22:12:16.140564 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 5 22:12:16.140580 kernel: ACPI: Interpreter enabled
Aug 5 22:12:16.140596 kernel: ACPI: PM: (supports S0 S5)
Aug 5 22:12:16.140612 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 5 22:12:16.140628 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 5 22:12:16.140648 kernel: PCI: Using E820 reservations for host bridge windows
Aug 5 22:12:16.140664 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 5 22:12:16.140680 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:12:16.140921 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:12:16.141193 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 5 22:12:16.141335 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 5 22:12:16.141357 kernel: acpiphp: Slot [3] registered
Aug 5 22:12:16.141378 kernel: acpiphp: Slot [4] registered
Aug 5 22:12:16.141394 kernel: acpiphp: Slot [5] registered
Aug 5 22:12:16.141411 kernel: acpiphp: Slot [6] registered
Aug 5 22:12:16.141426 kernel: acpiphp: Slot [7] registered
Aug 5 22:12:16.141489 kernel: acpiphp: Slot [8] registered
Aug 5 22:12:16.141510 kernel: acpiphp: Slot [9] registered
Aug 5 22:12:16.141527 kernel: acpiphp: Slot [10] registered
Aug 5 22:12:16.141543 kernel: acpiphp: Slot [11] registered
Aug 5 22:12:16.141559 kernel: acpiphp: Slot [12] registered
Aug 5 22:12:16.141576 kernel: acpiphp: Slot [13] registered
Aug 5 22:12:16.141596 kernel: acpiphp: Slot [14] registered
Aug 5 22:12:16.141611 kernel: acpiphp: Slot [15] registered
Aug 5 22:12:16.141628 kernel: acpiphp: Slot [16] registered
Aug 5 22:12:16.141644 kernel: acpiphp: Slot [17] registered
Aug 5 22:12:16.141660 kernel: acpiphp: Slot [18] registered
Aug 5 22:12:16.141676 kernel: acpiphp: Slot [19] registered
Aug 5 22:12:16.141723 kernel: acpiphp: Slot [20] registered
Aug 5 22:12:16.141744 kernel: acpiphp: Slot [21] registered
Aug 5 22:12:16.141761 kernel: acpiphp: Slot [22] registered
Aug 5 22:12:16.141781 kernel: acpiphp: Slot [23] registered
Aug 5 22:12:16.141797 kernel: acpiphp: Slot [24] registered
Aug 5 22:12:16.141813 kernel: acpiphp: Slot [25] registered
Aug 5 22:12:16.141829 kernel: acpiphp: Slot [26] registered
Aug 5 22:12:16.141904 kernel: acpiphp: Slot [27] registered
Aug 5 22:12:16.141925 kernel: acpiphp: Slot [28] registered
Aug 5 22:12:16.141943 kernel: acpiphp: Slot [29] registered
Aug 5 22:12:16.141971 kernel: acpiphp: Slot [30] registered
Aug 5 22:12:16.141988 kernel: acpiphp: Slot [31] registered
Aug 5 22:12:16.142004 kernel: PCI host bridge to bus 0000:00
Aug 5 22:12:16.142176 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 5 22:12:16.142560 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 5 22:12:16.142703 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 5 22:12:16.142826 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 5 22:12:16.142948 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:12:16.143296 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 5 22:12:16.143910 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 5 22:12:16.144292 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Aug 5 22:12:16.144444 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 5 22:12:16.144615 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Aug 5 22:12:16.144756 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Aug 5 22:12:16.145191 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Aug 5 22:12:16.145343 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Aug 5 22:12:16.145486 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Aug 5 22:12:16.145690 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Aug 5 22:12:16.145829 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Aug 5 22:12:16.146158 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Aug 5 22:12:16.146351 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Aug 5 22:12:16.146487 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 5 22:12:16.146622 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 5 22:12:16.146772 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 5 22:12:16.146906 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Aug 5 22:12:16.151802 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 5 22:12:16.152051 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Aug 5 22:12:16.152075 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 5 22:12:16.152092 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 5 22:12:16.152108 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 5 22:12:16.152136 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 5 22:12:16.152152 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 5 22:12:16.152168 kernel: iommu: Default domain type: Translated
Aug 5 22:12:16.152184 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 5 22:12:16.152200 kernel: PCI: Using ACPI for IRQ routing
Aug 5 22:12:16.152216 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 5 22:12:16.152232 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 5 22:12:16.152248 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Aug 5 22:12:16.152392 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Aug 5 22:12:16.152533 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Aug 5 22:12:16.152668 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 5 22:12:16.152688 kernel: vgaarb: loaded
Aug 5 22:12:16.152704 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Aug 5 22:12:16.152719 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Aug 5 22:12:16.152735 kernel: clocksource: Switched to clocksource kvm-clock
Aug 5 22:12:16.152751 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:12:16.152767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:12:16.152786 kernel: pnp: PnP ACPI init
Aug 5 22:12:16.152802 kernel: pnp: PnP ACPI: found 5 devices
Aug 5 22:12:16.152818 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 5 22:12:16.152834 kernel: NET: Registered PF_INET protocol family
Aug 5 22:12:16.152849 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:12:16.152865 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 5 22:12:16.152881 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:12:16.152897 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 5 22:12:16.152913 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 5 22:12:16.152932 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 5 22:12:16.152948 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:12:16.153021 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:12:16.153037 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:12:16.153318 kernel: NET: Registered PF_XDP protocol family
Aug 5 22:12:16.153497 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 5 22:12:16.153622 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 5 22:12:16.153744 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 5 22:12:16.153869 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 5 22:12:16.156148 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 5 22:12:16.156217 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:12:16.156235 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 5 22:12:16.156251 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 5 22:12:16.156265 kernel: clocksource: Switched to clocksource tsc
Aug 5 22:12:16.156307 kernel: Initialise system trusted keyrings
Aug 5 22:12:16.156323 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 5 22:12:16.156338 kernel: Key type asymmetric registered
Aug 5 22:12:16.156359 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:12:16.156430 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 5 22:12:16.156447 kernel: io scheduler mq-deadline registered
Aug 5 22:12:16.156547 kernel: io scheduler kyber registered
Aug 5 22:12:16.156564 kernel: io scheduler bfq registered
Aug 5 22:12:16.156578 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 5 22:12:16.156593 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:12:16.156609 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 5 22:12:16.156623 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 5 22:12:16.156702 kernel: i8042: Warning: Keylock active
Aug 5 22:12:16.156719 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 5 22:12:16.156734 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 5 22:12:16.156898 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 5 22:12:16.158003 kernel: rtc_cmos 00:00: registered as rtc0
Aug 5 22:12:16.160303 kernel: rtc_cmos 00:00: setting system clock to 2024-08-05T22:12:15 UTC (1722895935)
Aug 5 22:12:16.160453 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 5 22:12:16.160481 kernel: intel_pstate: CPU model not supported
Aug 5 22:12:16.160498 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:12:16.160514 kernel: Segment Routing with IPv6
Aug 5 22:12:16.160530 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:12:16.160546 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:12:16.160562 kernel: Key type dns_resolver registered
Aug 5 22:12:16.160578 kernel: IPI shorthand broadcast: enabled
Aug 5 22:12:16.160593 kernel: sched_clock: Marking stable (739002380, 315556089)->(1146612763, -92054294)
Aug 5 22:12:16.160609 kernel: registered taskstats version 1
Aug 5 22:12:16.160625 kernel: Loading compiled-in X.509 certificates
Aug 5 22:12:16.160858 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55'
Aug 5 22:12:16.160880 kernel: Key type .fscrypt registered
Aug 5 22:12:16.160897 kernel: Key type fscrypt-provisioning registered
Aug 5 22:12:16.160915 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:12:16.160931 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:12:16.160947 kernel: ima: No architecture policies found
Aug 5 22:12:16.160975 kernel: clk: Disabling unused clocks
Aug 5 22:12:16.160991 kernel: Freeing unused kernel image (initmem) memory: 49328K
Aug 5 22:12:16.161012 kernel: Write protecting the kernel read-only data: 36864k
Aug 5 22:12:16.161028 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 5 22:12:16.161044 kernel: Run /init as init process
Aug 5 22:12:16.161060 kernel: with arguments:
Aug 5 22:12:16.161109 kernel: /init
Aug 5 22:12:16.161124 kernel: with environment:
Aug 5 22:12:16.161140 kernel: HOME=/
Aug 5 22:12:16.161156 kernel: TERM=linux
Aug 5 22:12:16.161171 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:12:16.161191 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:12:16.163085 systemd[1]: Detected virtualization amazon.
Aug 5 22:12:16.163131 systemd[1]: Detected architecture x86-64.
Aug 5 22:12:16.163148 systemd[1]: Running in initrd.
Aug 5 22:12:16.163164 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:12:16.163184 systemd[1]: Hostname set to .
Aug 5 22:12:16.163201 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:12:16.163218 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:12:16.163234 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:12:16.163251 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:12:16.163270 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:12:16.163286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:12:16.163302 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:12:16.163322 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:12:16.163341 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:12:16.163358 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:12:16.163374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:12:16.163392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:12:16.163408 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:12:16.163424 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:12:16.163444 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:12:16.163460 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:12:16.163477 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:12:16.163493 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:12:16.163510 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:12:16.163527 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:12:16.163543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:12:16.163558 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:12:16.163575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:12:16.163601 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:12:16.163666 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:12:16.163682 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 5 22:12:16.163699 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:12:16.163714 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:12:16.163730 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:12:16.165037 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:12:16.165069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:12:16.165086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:12:16.165103 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:12:16.165157 systemd-journald[177]: Collecting audit messages is disabled. Aug 5 22:12:16.165197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:12:16.165215 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:12:16.165234 systemd-journald[177]: Journal started Aug 5 22:12:16.165271 systemd-journald[177]: Runtime Journal (/run/log/journal/ec239b0597cccfd85f42797f645980f6) is 4.8M, max 38.6M, 33.7M free. Aug 5 22:12:16.161639 systemd-modules-load[179]: Inserted module 'overlay' Aug 5 22:12:16.174066 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:12:16.178083 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:12:16.222205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:12:16.333328 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Aug 5 22:12:16.333366 kernel: Bridge firewalling registered Aug 5 22:12:16.235163 systemd-modules-load[179]: Inserted module 'br_netfilter' Aug 5 22:12:16.331752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:12:16.342262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:12:16.344335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:12:16.358379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:12:16.364938 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:12:16.367291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:12:16.372532 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:12:16.396198 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:12:16.410180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:12:16.413617 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:12:16.433410 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:12:16.435001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:12:16.444500 systemd-resolved[203]: Positive Trust Anchors: Aug 5 22:12:16.444532 systemd-resolved[203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:12:16.444597 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:12:16.458355 dracut-cmdline[215]: dracut-dracut-053 Aug 5 22:12:16.463116 systemd-resolved[203]: Defaulting to hostname 'linux'. Aug 5 22:12:16.467672 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:12:16.478851 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:12:16.488157 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:12:16.644984 kernel: SCSI subsystem initialized Aug 5 22:12:16.659987 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:12:16.678995 kernel: iscsi: registered transport (tcp) Aug 5 22:12:16.713992 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:12:16.714185 kernel: QLogic iSCSI HBA Driver Aug 5 22:12:16.784064 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Aug 5 22:12:16.793184 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:12:16.859577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:12:16.859680 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:12:16.859703 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:12:16.921017 kernel: raid6: avx512x4 gen() 13630 MB/s Aug 5 22:12:16.938013 kernel: raid6: avx512x2 gen() 13568 MB/s Aug 5 22:12:16.955016 kernel: raid6: avx512x1 gen() 9684 MB/s Aug 5 22:12:16.972067 kernel: raid6: avx2x4 gen() 9708 MB/s Aug 5 22:12:16.989141 kernel: raid6: avx2x2 gen() 6490 MB/s Aug 5 22:12:17.006706 kernel: raid6: avx2x1 gen() 9427 MB/s Aug 5 22:12:17.006781 kernel: raid6: using algorithm avx512x4 gen() 13630 MB/s Aug 5 22:12:17.023996 kernel: raid6: .... xor() 5041 MB/s, rmw enabled Aug 5 22:12:17.024081 kernel: raid6: using avx512x2 recovery algorithm Aug 5 22:12:17.056030 kernel: xor: automatically using best checksumming function avx Aug 5 22:12:17.282993 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:12:17.294085 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:12:17.301419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:12:17.362835 systemd-udevd[398]: Using default interface naming scheme 'v255'. Aug 5 22:12:17.388593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:12:17.414345 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:12:17.485151 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Aug 5 22:12:17.534140 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:12:17.544292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Aug 5 22:12:17.623497 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:12:17.635172 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:12:17.665711 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:12:17.669872 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:12:17.671911 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:12:17.674706 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:12:17.684693 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:12:17.755856 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:12:17.808821 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 5 22:12:17.831734 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 5 22:12:17.832039 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Aug 5 22:12:17.832278 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:12:17.832340 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:4a:53:aa:7f:01 Aug 5 22:12:17.838272 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:12:17.857555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:12:17.858148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:12:17.868512 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:12:17.870364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:12:17.870593 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:12:17.881105 kernel: AVX2 version of gcm_enc/dec engaged. 
Aug 5 22:12:17.881141 kernel: AES CTR mode by8 optimization enabled Aug 5 22:12:17.872172 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:12:17.903510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:12:17.943390 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 5 22:12:17.945075 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 5 22:12:17.961010 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 5 22:12:17.963996 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 5 22:12:17.964063 kernel: GPT:9289727 != 16777215 Aug 5 22:12:17.964082 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 22:12:17.964100 kernel: GPT:9289727 != 16777215 Aug 5 22:12:17.964116 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 22:12:17.964133 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:18.105025 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (463) Aug 5 22:12:18.120981 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (450) Aug 5 22:12:18.134930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:12:18.143286 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:12:18.194202 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:12:18.221912 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 5 22:12:18.234077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 22:12:18.262216 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 5 22:12:18.268377 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Aug 5 22:12:18.268522 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 5 22:12:18.278213 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:12:18.289499 disk-uuid[628]: Primary Header is updated. Aug 5 22:12:18.289499 disk-uuid[628]: Secondary Entries is updated. Aug 5 22:12:18.289499 disk-uuid[628]: Secondary Header is updated. Aug 5 22:12:18.296978 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:18.299977 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:18.304023 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:19.312033 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:19.312644 disk-uuid[629]: The operation has completed successfully. Aug 5 22:12:19.510376 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:12:19.510521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:12:19.557186 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:12:19.580014 sh[972]: Success Aug 5 22:12:19.615137 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:12:19.729704 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:12:19.740117 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:12:19.753340 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 5 22:12:19.793437 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12 Aug 5 22:12:19.793512 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:19.793531 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:12:19.795908 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:12:19.797194 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:12:19.924995 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 5 22:12:19.936877 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:12:19.937810 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:12:19.944289 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:12:19.946135 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:12:19.967172 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:19.967243 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:19.967266 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:12:19.972982 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:12:19.986737 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:12:19.988113 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:20.003802 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:12:20.019894 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:12:20.145276 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 5 22:12:20.166206 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:12:20.212377 systemd-networkd[1164]: lo: Link UP Aug 5 22:12:20.212389 systemd-networkd[1164]: lo: Gained carrier Aug 5 22:12:20.214635 systemd-networkd[1164]: Enumeration completed Aug 5 22:12:20.214833 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:12:20.215280 systemd[1]: Reached target network.target - Network. Aug 5 22:12:20.217197 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:12:20.217202 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:12:20.224863 systemd-networkd[1164]: eth0: Link UP Aug 5 22:12:20.224868 systemd-networkd[1164]: eth0: Gained carrier Aug 5 22:12:20.224883 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:12:20.248076 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.17.118/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 22:12:20.528373 ignition[1088]: Ignition 2.18.0 Aug 5 22:12:20.528386 ignition[1088]: Stage: fetch-offline Aug 5 22:12:20.528788 ignition[1088]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:20.528802 ignition[1088]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:20.530597 ignition[1088]: Ignition finished successfully Aug 5 22:12:20.534440 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:12:20.542168 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 5 22:12:20.581060 ignition[1174]: Ignition 2.18.0 Aug 5 22:12:20.581074 ignition[1174]: Stage: fetch Aug 5 22:12:20.581878 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:20.581893 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:20.582072 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:20.629303 ignition[1174]: PUT result: OK Aug 5 22:12:20.634365 ignition[1174]: parsed url from cmdline: "" Aug 5 22:12:20.634376 ignition[1174]: no config URL provided Aug 5 22:12:20.634385 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:12:20.634402 ignition[1174]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:12:20.634427 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:20.636874 ignition[1174]: PUT result: OK Aug 5 22:12:20.637016 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 5 22:12:20.637804 ignition[1174]: GET result: OK Aug 5 22:12:20.637867 ignition[1174]: parsing config with SHA512: 571e8d6d0f3d77847546d194ab1eb2ba5cacdfb78479091f043d6a4bf4ea6dc3c20e5fadc5dd7ea46e90f17092f39d0dec20c7d3a591dd334ddc9d9cff4143f7 Aug 5 22:12:20.677521 unknown[1174]: fetched base config from "system" Aug 5 22:12:20.677538 unknown[1174]: fetched base config from "system" Aug 5 22:12:20.677547 unknown[1174]: fetched user config from "aws" Aug 5 22:12:20.681627 ignition[1174]: fetch: fetch complete Aug 5 22:12:20.681636 ignition[1174]: fetch: fetch passed Aug 5 22:12:20.681714 ignition[1174]: Ignition finished successfully Aug 5 22:12:20.689827 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:12:20.701178 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 5 22:12:20.729947 ignition[1181]: Ignition 2.18.0 Aug 5 22:12:20.729981 ignition[1181]: Stage: kargs Aug 5 22:12:20.730517 ignition[1181]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:20.730539 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:20.730664 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:20.732319 ignition[1181]: PUT result: OK Aug 5 22:12:20.740897 ignition[1181]: kargs: kargs passed Aug 5 22:12:20.740998 ignition[1181]: Ignition finished successfully Aug 5 22:12:20.743687 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:12:20.749834 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:12:20.773035 ignition[1188]: Ignition 2.18.0 Aug 5 22:12:20.773048 ignition[1188]: Stage: disks Aug 5 22:12:20.773506 ignition[1188]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:20.773520 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:20.773624 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:20.777661 ignition[1188]: PUT result: OK Aug 5 22:12:20.782692 ignition[1188]: disks: disks passed Aug 5 22:12:20.782772 ignition[1188]: Ignition finished successfully Aug 5 22:12:20.785299 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:12:20.787596 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:12:20.792424 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:12:20.794646 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:12:20.794740 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:12:20.797689 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:12:20.805169 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Aug 5 22:12:20.852788 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 5 22:12:20.865332 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:12:20.876149 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:12:21.070987 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none. Aug 5 22:12:21.071263 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:12:21.074087 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:12:21.117569 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:12:21.122658 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:12:21.125480 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 22:12:21.125676 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:12:21.137990 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1216) Aug 5 22:12:21.125715 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:12:21.142119 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:21.142147 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:21.142159 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:12:21.150187 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:12:21.150081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:12:21.157019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:12:21.164200 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 5 22:12:21.311104 systemd-networkd[1164]: eth0: Gained IPv6LL Aug 5 22:12:21.543863 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:12:21.550987 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:12:21.558234 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:12:21.564783 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:12:21.875085 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:12:21.894021 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:12:21.910674 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:12:21.945987 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:21.946097 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 22:12:21.978212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:12:21.986548 ignition[1329]: INFO : Ignition 2.18.0 Aug 5 22:12:21.986548 ignition[1329]: INFO : Stage: mount Aug 5 22:12:21.988618 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:21.988618 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:21.988618 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:21.992565 ignition[1329]: INFO : PUT result: OK Aug 5 22:12:21.995083 ignition[1329]: INFO : mount: mount passed Aug 5 22:12:21.996200 ignition[1329]: INFO : Ignition finished successfully Aug 5 22:12:21.999030 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:12:22.012740 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:12:22.082233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 5 22:12:22.104061 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1341) Aug 5 22:12:22.106991 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:22.107060 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:22.110849 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:12:22.117420 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:12:22.121414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:12:22.184700 ignition[1358]: INFO : Ignition 2.18.0 Aug 5 22:12:22.184700 ignition[1358]: INFO : Stage: files Aug 5 22:12:22.187382 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:22.187382 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:22.190061 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:22.192314 ignition[1358]: INFO : PUT result: OK Aug 5 22:12:22.195299 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:12:22.197157 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:12:22.197157 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:12:22.233482 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:12:22.237046 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:12:22.242050 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:12:22.240395 unknown[1358]: wrote ssh authorized keys file for user: core Aug 5 22:12:22.250813 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 
22:12:22.250813 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:12:22.325987 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:12:22.492359 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:12:22.492359 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:12:22.498188 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:12:22.498188 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:12:22.504086 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:12:22.504086 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:12:22.504086 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:12:22.504086 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:12:22.504086 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:12:22.524004 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:12:22.524004 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:12:22.524004 
ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:12:22.524004 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:12:22.524004 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:12:22.524004 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Aug 5 22:12:23.319177 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:12:24.661487 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:12:24.666200 ignition[1358]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:12:24.680519 ignition[1358]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:12:24.684145 ignition[1358]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:12:24.684145 ignition[1358]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:12:24.684145 ignition[1358]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:12:24.684145 ignition[1358]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:12:24.684145 ignition[1358]: INFO : files: createResultFile: createFiles: op(e): 
[started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:12:24.684145 ignition[1358]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:12:24.684145 ignition[1358]: INFO : files: files passed Aug 5 22:12:24.684145 ignition[1358]: INFO : Ignition finished successfully Aug 5 22:12:24.696515 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:12:24.709439 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:12:24.725065 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:12:24.734057 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:12:24.734597 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:12:24.748291 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:12:24.748291 initrd-setup-root-after-ignition[1387]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:12:24.754069 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:12:24.757865 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:12:24.763290 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:12:24.773228 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:12:24.817014 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:12:24.817145 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:12:24.827499 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:12:24.829451 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Aug 5 22:12:24.833497 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:12:24.841436 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:12:24.867408 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:12:24.877505 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:12:24.900014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:12:24.909715 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:12:24.913069 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:12:24.914543 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:12:24.914877 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:12:24.920946 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:12:24.924413 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:12:24.927847 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:12:24.930799 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:12:24.933970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:12:24.937101 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:12:24.941508 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:12:24.945220 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:12:24.948725 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:12:24.955778 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:12:24.967162 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Aug 5 22:12:24.969907 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:12:24.972698 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:12:24.974185 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:12:24.976522 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:12:24.977594 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:12:24.980027 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:12:24.981271 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:12:24.989424 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:12:24.989728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:12:25.003247 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:12:25.003625 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:12:25.011203 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:12:25.029573 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:12:25.033720 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:12:25.034540 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:12:25.036611 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:12:25.036765 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:12:25.053344 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:12:25.053768 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:12:25.067989 ignition[1411]: INFO : Ignition 2.18.0
Aug 5 22:12:25.067989 ignition[1411]: INFO : Stage: umount
Aug 5 22:12:25.070345 ignition[1411]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:12:25.070345 ignition[1411]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:12:25.070345 ignition[1411]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:12:25.070345 ignition[1411]: INFO : PUT result: OK
Aug 5 22:12:25.092361 ignition[1411]: INFO : umount: umount passed
Aug 5 22:12:25.093727 ignition[1411]: INFO : Ignition finished successfully
Aug 5 22:12:25.095323 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:12:25.097716 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:12:25.107797 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:12:25.107871 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:12:25.118303 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:12:25.118388 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:12:25.120389 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 5 22:12:25.120440 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 5 22:12:25.121605 systemd[1]: Stopped target network.target - Network.
Aug 5 22:12:25.123353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:12:25.123491 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:12:25.129227 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:12:25.131176 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:12:25.133002 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:12:25.140929 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:12:25.143683 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:12:25.146032 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:12:25.146100 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:12:25.156423 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:12:25.156506 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:12:25.163913 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:12:25.164013 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:12:25.166734 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:12:25.166821 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:12:25.171846 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:12:25.179490 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:12:25.197922 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:12:25.212178 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Aug 5 22:12:25.220922 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:12:25.222337 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:12:25.244197 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:12:25.244363 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:12:25.253643 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:12:25.253932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:12:25.261486 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:12:25.263699 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:12:25.263863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:12:25.265757 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:12:25.265830 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:12:25.267360 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:12:25.267850 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:12:25.269363 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:12:25.269634 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:12:25.272132 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:12:25.279632 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:12:25.279914 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:12:25.306162 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:12:25.306253 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:12:25.316439 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:12:25.318086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:12:25.324410 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:12:25.324489 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:12:25.332451 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:12:25.332506 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:12:25.333022 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:12:25.333086 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:12:25.333616 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:12:25.333664 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:12:25.334265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:12:25.334320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:12:25.369336 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:12:25.369502 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:12:25.369572 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:12:25.390194 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:12:25.390324 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:12:25.397131 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:12:25.397213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:12:25.411108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:12:25.411192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:12:25.437829 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:12:25.438008 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:12:25.443465 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:12:25.443595 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:12:25.446275 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:12:25.452322 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:12:25.477868 systemd[1]: Switching root.
Aug 5 22:12:25.526681 systemd-journald[177]: Journal stopped
Aug 5 22:12:28.540904 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:12:28.543063 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:12:28.543103 kernel: SELinux: policy capability open_perms=1
Aug 5 22:12:28.543126 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:12:28.543152 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:12:28.543172 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:12:28.543199 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:12:28.543225 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:12:28.543245 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:12:28.543271 kernel: audit: type=1403 audit(1722895946.708:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:12:28.543295 systemd[1]: Successfully loaded SELinux policy in 103.417ms.
Aug 5 22:12:28.543326 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.558ms.
Aug 5 22:12:28.543351 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:12:28.543373 systemd[1]: Detected virtualization amazon.
Aug 5 22:12:28.543396 systemd[1]: Detected architecture x86-64.
Aug 5 22:12:28.543418 systemd[1]: Detected first boot.
Aug 5 22:12:28.543447 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:12:28.543467 zram_generator::config[1453]: No configuration found.
Aug 5 22:12:28.543497 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:12:28.543610 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:12:28.543634 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:12:28.543653 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:12:28.543741 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:12:28.543765 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:12:28.543789 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:12:28.543809 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:12:28.543830 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:12:28.543851 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:12:28.543872 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:12:28.543897 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:12:28.543917 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:12:28.543939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:12:28.548081 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:12:28.548127 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:12:28.548148 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:12:28.548168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:12:28.548186 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 22:12:28.548205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:12:28.548227 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:12:28.548245 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:12:28.548265 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:12:28.548286 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:12:28.548303 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:12:28.548322 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:12:28.548341 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:12:28.548359 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:12:28.548378 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:12:28.548398 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:12:28.548417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:12:28.548440 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:12:28.548459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:12:28.548478 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:12:28.548496 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:12:28.548514 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:12:28.548533 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:12:28.548552 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:28.548571 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:12:28.548590 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:12:28.548611 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:12:28.548631 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:12:28.548649 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:12:28.548668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:12:28.548686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:12:28.548704 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:12:28.548722 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:12:28.548740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:12:28.548758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:12:28.548780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:12:28.548799 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:12:28.548817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:12:28.548836 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:12:28.548856 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:12:28.548874 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:12:28.548892 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:12:28.548910 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:12:28.548931 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:12:28.548949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:12:28.548984 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:12:28.549002 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:12:28.549021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:12:28.549039 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:12:28.549058 systemd[1]: Stopped verity-setup.service.
Aug 5 22:12:28.549078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:28.549097 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:12:28.549120 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:12:28.549138 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:12:28.549156 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:12:28.549175 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:12:28.549197 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:12:28.549216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:12:28.549233 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:12:28.549251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:12:28.549305 systemd-journald[1531]: Collecting audit messages is disabled.
Aug 5 22:12:28.549344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:12:28.549362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:12:28.549381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:12:28.549403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:12:28.549422 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:12:28.549440 kernel: fuse: init (API version 7.39)
Aug 5 22:12:28.549462 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:12:28.549480 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:12:28.549501 systemd-journald[1531]: Journal started
Aug 5 22:12:28.549536 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec239b0597cccfd85f42797f645980f6) is 4.8M, max 38.6M, 33.7M free.
Aug 5 22:12:27.906196 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:12:27.976391 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 5 22:12:27.976842 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:12:28.558111 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:12:28.558191 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:12:28.587985 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:12:28.588074 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:12:28.588102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:12:28.592994 kernel: loop: module loaded
Aug 5 22:12:28.606084 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:12:28.610028 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:12:28.622001 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:12:28.634021 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:12:28.649995 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:12:28.656987 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:12:28.663691 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:12:28.665472 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:12:28.667368 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:12:28.667586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:12:28.670558 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:12:28.672369 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:12:28.675499 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:12:28.686465 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:12:28.694752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:12:28.711655 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:12:28.717061 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:12:28.718629 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:12:28.729163 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:12:28.743735 kernel: loop0: detected capacity change from 0 to 139904
Aug 5 22:12:28.743822 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:12:28.741291 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:12:28.755227 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:12:28.757792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:12:28.765197 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:12:28.777258 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:12:28.781020 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:12:28.783322 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:12:28.810065 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec239b0597cccfd85f42797f645980f6 is 192.645ms for 964 entries.
Aug 5 22:12:28.810065 systemd-journald[1531]: System Journal (/var/log/journal/ec239b0597cccfd85f42797f645980f6) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:12:29.025099 systemd-journald[1531]: Received client request to flush runtime journal.
Aug 5 22:12:29.025212 kernel: ACPI: bus type drm_connector registered
Aug 5 22:12:29.025260 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:12:28.833376 udevadm[1584]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 22:12:28.844534 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:12:28.846781 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:12:28.873399 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Aug 5 22:12:28.873501 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Aug 5 22:12:28.884383 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:12:28.889522 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:12:28.930993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:12:28.958108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:12:28.975209 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:12:29.031199 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:12:29.057219 kernel: loop1: detected capacity change from 0 to 60984
Aug 5 22:12:29.125052 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:12:29.137282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:12:29.187300 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Aug 5 22:12:29.187328 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Aug 5 22:12:29.192040 kernel: loop2: detected capacity change from 0 to 80568
Aug 5 22:12:29.202379 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:12:29.333098 kernel: loop3: detected capacity change from 0 to 210664
Aug 5 22:12:29.418780 kernel: loop4: detected capacity change from 0 to 139904
Aug 5 22:12:29.448999 kernel: loop5: detected capacity change from 0 to 60984
Aug 5 22:12:29.468448 kernel: loop6: detected capacity change from 0 to 80568
Aug 5 22:12:29.488985 kernel: loop7: detected capacity change from 0 to 210664
Aug 5 22:12:29.513802 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 5 22:12:29.515094 (sd-merge)[1607]: Merged extensions into '/usr'.
Aug 5 22:12:29.527334 systemd[1]: Reloading requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:12:29.527504 systemd[1]: Reloading...
Aug 5 22:12:29.653013 zram_generator::config[1628]: No configuration found.
Aug 5 22:12:30.109184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:12:30.242690 systemd[1]: Reloading finished in 714 ms.
Aug 5 22:12:30.277530 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:12:30.289268 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:12:30.299890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:12:30.327565 systemd[1]: Reloading requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:12:30.327585 systemd[1]: Reloading...
Aug 5 22:12:30.345281 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:12:30.345836 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:12:30.346770 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:12:30.347227 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
Aug 5 22:12:30.347309 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
Aug 5 22:12:30.376347 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:12:30.376363 systemd-tmpfiles[1680]: Skipping /boot
Aug 5 22:12:30.455069 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:12:30.455089 systemd-tmpfiles[1680]: Skipping /boot
Aug 5 22:12:30.471554 zram_generator::config[1706]: No configuration found.
Aug 5 22:12:30.688869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:12:30.708756 ldconfig[1549]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:12:30.754556 systemd[1]: Reloading finished in 426 ms.
Aug 5 22:12:30.776074 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:12:30.777852 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:12:30.785477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:12:30.801493 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:12:30.806317 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:12:30.820339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:12:30.825348 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:12:30.835360 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:12:30.838467 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:12:30.845563 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:30.845890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:12:30.854317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:12:30.864277 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:12:30.869348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:12:30.870713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:12:30.870911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:30.876273 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:30.876600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:12:30.876856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:12:30.877028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:30.882402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:30.882757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:12:30.892935 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:12:30.894753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:12:30.894985 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:12:30.896233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:30.902549 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:12:30.953596 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:12:30.970482 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:12:30.974171 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:12:30.975810 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:12:30.976026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:12:30.989099 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:12:30.995605 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:12:30.996622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:12:31.001808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:12:31.003028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:12:31.005050 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:12:31.005415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:12:31.008639 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:12:31.011377 systemd-udevd[1769]: Using default interface naming scheme 'v255'. Aug 5 22:12:31.014877 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:12:31.015828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:12:31.051049 augenrules[1793]: No rules Aug 5 22:12:31.052375 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:31.064575 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:12:31.066595 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:12:31.071736 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Aug 5 22:12:31.073827 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:12:31.084387 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:12:31.200989 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1813)
Aug 5 22:12:31.201634 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 5 22:12:31.206846 (udev-worker)[1815]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:12:31.270727 systemd-networkd[1807]: lo: Link UP
Aug 5 22:12:31.272008 systemd-networkd[1807]: lo: Gained carrier
Aug 5 22:12:31.275865 systemd-networkd[1807]: Enumeration completed
Aug 5 22:12:31.277446 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:12:31.280447 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:12:31.280458 systemd-networkd[1807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:12:31.285234 systemd-resolved[1765]: Positive Trust Anchors:
Aug 5 22:12:31.286170 systemd-networkd[1807]: eth0: Link UP
Aug 5 22:12:31.286432 systemd-networkd[1807]: eth0: Gained carrier
Aug 5 22:12:31.286463 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:12:31.286952 systemd-resolved[1765]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:12:31.287137 systemd-resolved[1765]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:12:31.287743 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:12:31.298364 systemd-resolved[1765]: Defaulting to hostname 'linux'.
Aug 5 22:12:31.299039 systemd-networkd[1807]: eth0: DHCPv4 address 172.31.17.118/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 22:12:31.301921 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:12:31.303381 systemd[1]: Reached target network.target - Network.
Aug 5 22:12:31.304287 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:12:31.327478 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:12:31.361097 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Aug 5 22:12:31.367179 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1808)
Aug 5 22:12:31.372986 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Aug 5 22:12:31.381986 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Aug 5 22:12:31.391992 kernel: ACPI: button: Power Button [PWRF]
Aug 5 22:12:31.392149 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Aug 5 22:12:31.393357 kernel: ACPI: button: Sleep Button [SLPF]
Aug 5 22:12:31.501698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:12:31.533988 kernel: mousedev: PS/2 mouse device common for all mice
Aug 5 22:12:31.579996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 22:12:31.585220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:12:31.586880 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:12:31.597229 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:12:31.623833 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:12:31.624225 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:12:31.657782 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:12:31.659825 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:12:31.770570 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:12:31.775955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:12:31.780854 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:12:31.786727 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:12:31.787493 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:12:31.789378 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:12:31.791654 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:12:31.793535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:12:31.795603 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:12:31.801099 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:12:31.801151 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:12:31.803849 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:12:31.808894 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:12:31.813051 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:12:31.832937 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:12:31.838619 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:12:31.841521 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:12:31.849289 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:12:31.850524 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:12:31.851802 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:12:31.851848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:12:31.863006 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:12:31.874205 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 5 22:12:31.883383 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:12:31.892123 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:12:31.895361 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:12:31.896715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:12:31.910255 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:12:31.958730 systemd[1]: Started ntpd.service - Network Time Service.
Aug 5 22:12:31.971209 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:12:31.978169 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 5 22:12:31.990337 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:12:32.003233 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:12:32.010625 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:12:32.025272 jq[1935]: false
Aug 5 22:12:32.014727 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:12:32.015475 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:12:32.024383 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:12:32.040102 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:12:32.055208 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:12:32.056229 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:12:32.066769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:12:32.068061 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:12:32.076049 dbus-daemon[1934]: [system] SELinux support is enabled
Aug 5 22:12:32.095854 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:12:32.125440 dbus-daemon[1934]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1807 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 5 22:12:32.138856 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:12:32.138919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:12:32.140585 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:12:32.140625 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:12:32.174803 dbus-daemon[1934]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found loop4
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found loop5
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found loop6
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found loop7
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1p1
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1p2
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1p3
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found usr
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1p4
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1p6
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1p7
Aug 5 22:12:32.184454 extend-filesystems[1936]: Found nvme0n1p9
Aug 5 22:12:32.184454 extend-filesystems[1936]: Checking size of /dev/nvme0n1p9
Aug 5 22:12:32.235819 jq[1948]: true
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: ----------------------------------------------------
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: corporation. Support and training for ntp-4 are
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: available at https://www.nwtime.org/support
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: ----------------------------------------------------
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: proto: precision = 0.088 usec (-23)
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: basedate set to 2024-07-24
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:12:32.235940 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:12:32.233154 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 5 22:12:32.203207 ntpd[1938]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Listen normally on 3 eth0 172.31.17.118:123
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Listen normally on 4 lo [::1]:123
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: bind(21) AF_INET6 fe80::44a:53ff:feaa:7f01%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: unable to create socket on eth0 (5) for fe80::44a:53ff:feaa:7f01%2#123
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: failed to init interface for address fe80::44a:53ff:feaa:7f01%2
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: Listening on routing socket on fd #21 for interface updates
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:32.302190 ntpd[1938]: 5 Aug 22:12:32 ntpd[1938]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:32.275773 (ntainerd)[1970]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:12:32.302941 extend-filesystems[1936]: Resized partition /dev/nvme0n1p9
Aug 5 22:12:32.306309 tar[1952]: linux-amd64/helm
Aug 5 22:12:32.306778 update_engine[1947]: I0805 22:12:32.258921 1947 main.cc:92] Flatcar Update Engine starting
Aug 5 22:12:32.203236 ntpd[1938]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:12:32.302377 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:12:32.203248 ntpd[1938]: ----------------------------------------------------
Aug 5 22:12:32.304300 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:12:32.203258 ntpd[1938]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:12:32.203267 ntpd[1938]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:12:32.203277 ntpd[1938]: corporation. Support and training for ntp-4 are
Aug 5 22:12:32.203287 ntpd[1938]: available at https://www.nwtime.org/support
Aug 5 22:12:32.203296 ntpd[1938]: ----------------------------------------------------
Aug 5 22:12:32.210844 ntpd[1938]: proto: precision = 0.088 usec (-23)
Aug 5 22:12:32.214168 ntpd[1938]: basedate set to 2024-07-24
Aug 5 22:12:32.214190 ntpd[1938]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:12:32.235223 ntpd[1938]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:12:32.235292 ntpd[1938]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:12:32.241212 ntpd[1938]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:12:32.241258 ntpd[1938]: Listen normally on 3 eth0 172.31.17.118:123
Aug 5 22:12:32.241300 ntpd[1938]: Listen normally on 4 lo [::1]:123
Aug 5 22:12:32.241352 ntpd[1938]: bind(21) AF_INET6 fe80::44a:53ff:feaa:7f01%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 22:12:32.241375 ntpd[1938]: unable to create socket on eth0 (5) for fe80::44a:53ff:feaa:7f01%2#123
Aug 5 22:12:32.241391 ntpd[1938]: failed to init interface for address fe80::44a:53ff:feaa:7f01%2
Aug 5 22:12:32.241426 ntpd[1938]: Listening on routing socket on fd #21 for interface updates
Aug 5 22:12:32.266395 ntpd[1938]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:32.266490 ntpd[1938]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:32.315054 extend-filesystems[1982]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:12:32.328992 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 5 22:12:32.333937 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:12:32.334642 update_engine[1947]: I0805 22:12:32.334012 1947 update_check_scheduler.cc:74] Next update check in 6m51s
Aug 5 22:12:32.351224 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:12:32.383622 jq[1968]: true
Aug 5 22:12:32.428263 systemd-logind[1946]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 5 22:12:32.428318 systemd-logind[1946]: Watching system buttons on /dev/input/event3 (Sleep Button)
Aug 5 22:12:32.428346 systemd-logind[1946]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 5 22:12:32.430107 systemd-logind[1946]: New seat seat0.
Aug 5 22:12:32.434004 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 5 22:12:32.436131 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 5 22:12:32.437685 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:12:32.490529 extend-filesystems[1982]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 5 22:12:32.490529 extend-filesystems[1982]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:12:32.490529 extend-filesystems[1982]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 5 22:12:32.495009 extend-filesystems[1936]: Resized filesystem in /dev/nvme0n1p9
Aug 5 22:12:32.497622 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:12:32.497867 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:12:32.504372 coreos-metadata[1933]: Aug 05 22:12:32.503 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:12:32.507174 coreos-metadata[1933]: Aug 05 22:12:32.506 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 5 22:12:32.514173 coreos-metadata[1933]: Aug 05 22:12:32.514 INFO Fetch successful
Aug 5 22:12:32.514287 coreos-metadata[1933]: Aug 05 22:12:32.514 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 5 22:12:32.517457 coreos-metadata[1933]: Aug 05 22:12:32.516 INFO Fetch successful
Aug 5 22:12:32.517457 coreos-metadata[1933]: Aug 05 22:12:32.516 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 5 22:12:32.517457 coreos-metadata[1933]: Aug 05 22:12:32.517 INFO Fetch successful
Aug 5 22:12:32.517457 coreos-metadata[1933]: Aug 05 22:12:32.517 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 5 22:12:32.517804 coreos-metadata[1933]: Aug 05 22:12:32.517 INFO Fetch successful
Aug 5 22:12:32.517868 coreos-metadata[1933]: Aug 05 22:12:32.517 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 5 22:12:32.518980 coreos-metadata[1933]: Aug 05 22:12:32.518 INFO Fetch failed with 404: resource not found
Aug 5 22:12:32.518980 coreos-metadata[1933]: Aug 05 22:12:32.518 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 5 22:12:32.528408 coreos-metadata[1933]: Aug 05 22:12:32.520 INFO Fetch successful
Aug 5 22:12:32.528408 coreos-metadata[1933]: Aug 05 22:12:32.520 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 5 22:12:32.533261 coreos-metadata[1933]: Aug 05 22:12:32.529 INFO Fetch successful
Aug 5 22:12:32.533261 coreos-metadata[1933]: Aug 05 22:12:32.529 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 5 22:12:32.547289 coreos-metadata[1933]: Aug 05 22:12:32.540 INFO Fetch successful
Aug 5 22:12:32.547289 coreos-metadata[1933]: Aug 05 22:12:32.540 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 5 22:12:32.547289 coreos-metadata[1933]: Aug 05 22:12:32.547 INFO Fetch successful
Aug 5 22:12:32.547289 coreos-metadata[1933]: Aug 05 22:12:32.547 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 5 22:12:32.550738 coreos-metadata[1933]: Aug 05 22:12:32.550 INFO Fetch successful
Aug 5 22:12:32.595677 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1808)
Aug 5 22:12:32.661793 bash[2016]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:12:32.679121 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:12:32.705785 systemd[1]: Starting sshkeys.service...
Aug 5 22:12:32.710013 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 5 22:12:32.719294 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:12:32.770651 locksmithd[1984]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:12:32.793851 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 5 22:12:32.805514 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 5 22:12:32.870769 dbus-daemon[1934]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 5 22:12:32.871184 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 5 22:12:32.895478 dbus-daemon[1934]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1976 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 5 22:12:32.910775 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 5 22:12:33.023731 polkitd[2065]: Started polkitd version 121
Aug 5 22:12:33.071790 polkitd[2065]: Loading rules from directory /etc/polkit-1/rules.d
Aug 5 22:12:33.071889 polkitd[2065]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 5 22:12:33.079740 polkitd[2065]: Finished loading, compiling and executing 2 rules
Aug 5 22:12:33.088747 dbus-daemon[1934]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 5 22:12:33.089045 systemd-networkd[1807]: eth0: Gained IPv6LL
Aug 5 22:12:33.090431 systemd[1]: Started polkit.service - Authorization Manager.
Aug 5 22:12:33.093551 polkitd[2065]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 5 22:12:33.112213 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:12:33.119082 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:12:33.139656 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Aug 5 22:12:33.154500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:12:33.167896 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:12:33.285522 systemd-hostnamed[1976]: Hostname set to (transient)
Aug 5 22:12:33.287215 systemd-resolved[1765]: System hostname changed to 'ip-172-31-17-118'.
Aug 5 22:12:33.384297 coreos-metadata[2047]: Aug 05 22:12:33.381 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:12:33.388211 coreos-metadata[2047]: Aug 05 22:12:33.388 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Aug 5 22:12:33.394723 coreos-metadata[2047]: Aug 05 22:12:33.393 INFO Fetch successful
Aug 5 22:12:33.394723 coreos-metadata[2047]: Aug 05 22:12:33.393 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 5 22:12:33.409126 coreos-metadata[2047]: Aug 05 22:12:33.400 INFO Fetch successful
Aug 5 22:12:33.413928 unknown[2047]: wrote ssh authorized keys file for user: core
Aug 5 22:12:33.499587 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:12:33.588391 update-ssh-keys[2137]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:12:33.591399 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 5 22:12:33.604649 systemd[1]: Finished sshkeys.service.
Aug 5 22:12:33.615087 sshd_keygen[1975]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:12:33.742333 amazon-ssm-agent[2114]: Initializing new seelog logger
Aug 5 22:12:33.742760 amazon-ssm-agent[2114]: New Seelog Logger Creation Complete
Aug 5 22:12:33.742803 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.742803 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.747979 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 processing appconfig overrides
Aug 5 22:12:33.750993 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.750993 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.751193 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 processing appconfig overrides
Aug 5 22:12:33.751537 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.751537 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.751644 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 processing appconfig overrides
Aug 5 22:12:33.752409 amazon-ssm-agent[2114]: 2024-08-05 22:12:33 INFO Proxy environment variables:
Aug 5 22:12:33.760802 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.760802 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:33.760802 amazon-ssm-agent[2114]: 2024/08/05 22:12:33 processing appconfig overrides
Aug 5 22:12:33.773125 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:12:33.785393 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:12:33.853499 amazon-ssm-agent[2114]: 2024-08-05 22:12:33 INFO no_proxy:
Aug 5 22:12:33.855501 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:12:33.855857 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:12:33.867096 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:12:33.895290 containerd[1970]: time="2024-08-05T22:12:33.895182069Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Aug 5 22:12:33.918733 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:12:33.933526 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:12:33.943887 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 5 22:12:33.945760 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:12:33.966855 amazon-ssm-agent[2114]: 2024-08-05 22:12:33 INFO https_proxy:
Aug 5 22:12:34.062649 containerd[1970]: time="2024-08-05T22:12:34.061829945Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:12:34.062649 containerd[1970]: time="2024-08-05T22:12:34.062035455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:34.084444 amazon-ssm-agent[2114]: 2024-08-05 22:12:33 INFO http_proxy:
Aug 5 22:12:34.086880 containerd[1970]: time="2024-08-05T22:12:34.086817981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:34.086880 containerd[1970]: time="2024-08-05T22:12:34.086877474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:34.087236 containerd[1970]: time="2024-08-05T22:12:34.087201569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:34.087296 containerd[1970]: time="2024-08-05T22:12:34.087237517Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:12:34.095240 containerd[1970]: time="2024-08-05T22:12:34.095047145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:34.098024 containerd[1970]: time="2024-08-05T22:12:34.095661066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:34.098024 containerd[1970]: time="2024-08-05T22:12:34.096248093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:34.098024 containerd[1970]: time="2024-08-05T22:12:34.096944683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:34.098024 containerd[1970]: time="2024-08-05T22:12:34.097844820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:34.098024 containerd[1970]: time="2024-08-05T22:12:34.097875017Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:12:34.098024 containerd[1970]: time="2024-08-05T22:12:34.097890814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:34.098293 containerd[1970]: time="2024-08-05T22:12:34.098088184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:34.098293 containerd[1970]: time="2024-08-05T22:12:34.098109584Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:12:34.098293 containerd[1970]: time="2024-08-05T22:12:34.098182607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:12:34.098293 containerd[1970]: time="2024-08-05T22:12:34.098197973Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.116630783Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.116712319Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.116735541Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.119237451Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.119616415Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.119643666Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.119664701Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.120168703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.120194496Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.120230999Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.120254065Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.120351130Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.120380885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:12:34.120990 containerd[1970]: time="2024-08-05T22:12:34.120402924Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:12:34.121553 containerd[1970]: time="2024-08-05T22:12:34.120438319Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:12:34.121553 containerd[1970]: time="2024-08-05T22:12:34.120461383Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:12:34.121553 containerd[1970]: time="2024-08-05T22:12:34.120494879Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:12:34.121553 containerd[1970]: time="2024-08-05T22:12:34.120515741Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:12:34.121553 containerd[1970]: time="2024-08-05T22:12:34.120534393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..."
type=io.containerd.runtime.v1 Aug 5 22:12:34.121553 containerd[1970]: time="2024-08-05T22:12:34.120805500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:12:34.121553 containerd[1970]: time="2024-08-05T22:12:34.121537573Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121584686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121607651Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121644124Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121711898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121731391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121810993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121828462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.121867 containerd[1970]: time="2024-08-05T22:12:34.121858588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.121878364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.121898866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.121917516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.121937415Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.122162818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.122189228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.122210755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.122555 containerd[1970]: time="2024-08-05T22:12:34.122491947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.123045 containerd[1970]: time="2024-08-05T22:12:34.122557959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.123045 containerd[1970]: time="2024-08-05T22:12:34.122581902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.123045 containerd[1970]: time="2024-08-05T22:12:34.122600277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:12:34.123045 containerd[1970]: time="2024-08-05T22:12:34.122618322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 5 22:12:34.123243 containerd[1970]: time="2024-08-05T22:12:34.123159692Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:12:34.123632 containerd[1970]: time="2024-08-05T22:12:34.123253802Z" level=info msg="Connect containerd service" Aug 5 22:12:34.123632 containerd[1970]: time="2024-08-05T22:12:34.123308379Z" level=info msg="using legacy CRI server" Aug 5 22:12:34.123632 containerd[1970]: time="2024-08-05T22:12:34.123382851Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:12:34.123817 containerd[1970]: time="2024-08-05T22:12:34.123645449Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:12:34.127868 containerd[1970]: time="2024-08-05T22:12:34.126397452Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:12:34.127868 containerd[1970]: time="2024-08-05T22:12:34.126703646Z" level=info msg="Start subscribing containerd event" Aug 5 22:12:34.127868 containerd[1970]: time="2024-08-05T22:12:34.126766015Z" level=info msg="Start recovering state" Aug 5 22:12:34.127868 containerd[1970]: time="2024-08-05T22:12:34.126849949Z" level=info msg="Start event monitor" Aug 5 22:12:34.127868 containerd[1970]: time="2024-08-05T22:12:34.126915703Z" level=info msg="Start snapshots 
syncer" Aug 5 22:12:34.127868 containerd[1970]: time="2024-08-05T22:12:34.126929769Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:12:34.127868 containerd[1970]: time="2024-08-05T22:12:34.126941880Z" level=info msg="Start streaming server" Aug 5 22:12:34.128978 containerd[1970]: time="2024-08-05T22:12:34.128197561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:12:34.128978 containerd[1970]: time="2024-08-05T22:12:34.128235231Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:12:34.128978 containerd[1970]: time="2024-08-05T22:12:34.128254121Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:12:34.128978 containerd[1970]: time="2024-08-05T22:12:34.128274377Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:12:34.128978 containerd[1970]: time="2024-08-05T22:12:34.128519337Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:12:34.128978 containerd[1970]: time="2024-08-05T22:12:34.128567163Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:12:34.133458 systemd[1]: Started containerd.service - containerd container runtime. 
Aug 5 22:12:34.135551 containerd[1970]: time="2024-08-05T22:12:34.133607812Z" level=info msg="containerd successfully booted in 0.248939s" Aug 5 22:12:34.186049 amazon-ssm-agent[2114]: 2024-08-05 22:12:33 INFO Checking if agent identity type OnPrem can be assumed Aug 5 22:12:34.284453 amazon-ssm-agent[2114]: 2024-08-05 22:12:33 INFO Checking if agent identity type EC2 can be assumed Aug 5 22:12:34.387067 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO Agent will take identity from EC2 Aug 5 22:12:34.492313 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 22:12:34.598040 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 22:12:34.697585 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 22:12:34.796935 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 5 22:12:34.826849 tar[1952]: linux-amd64/LICENSE Aug 5 22:12:34.827528 tar[1952]: linux-amd64/README.md Aug 5 22:12:34.848170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:12:34.884739 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Aug 5 22:12:34.885356 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [amazon-ssm-agent] Starting Core Agent Aug 5 22:12:34.885356 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 5 22:12:34.885356 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [Registrar] Starting registrar module Aug 5 22:12:34.885491 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 5 22:12:34.885491 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [EC2Identity] EC2 registration was successful. 
Aug 5 22:12:34.885491 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [CredentialRefresher] credentialRefresher has started Aug 5 22:12:34.885491 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [CredentialRefresher] Starting credentials refresher loop Aug 5 22:12:34.885491 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 5 22:12:34.896529 amazon-ssm-agent[2114]: 2024-08-05 22:12:34 INFO [CredentialRefresher] Next credential rotation will be in 30.349983313016665 minutes Aug 5 22:12:35.108471 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:12:35.123847 systemd[1]: Started sshd@0-172.31.17.118:22-139.178.89.65:42250.service - OpenSSH per-connection server daemon (139.178.89.65:42250). Aug 5 22:12:35.203746 ntpd[1938]: Listen normally on 6 eth0 [fe80::44a:53ff:feaa:7f01%2]:123 Aug 5 22:12:35.209381 ntpd[1938]: 5 Aug 22:12:35 ntpd[1938]: Listen normally on 6 eth0 [fe80::44a:53ff:feaa:7f01%2]:123 Aug 5 22:12:35.409718 sshd[2179]: Accepted publickey for core from 139.178.89.65 port 42250 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:35.411253 sshd[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:35.427240 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:12:35.437324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:12:35.445930 systemd-logind[1946]: New session 1 of user core. Aug 5 22:12:35.469595 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:12:35.480475 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:12:35.496751 (systemd)[2183]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:35.678154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:12:35.682849 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:12:35.694769 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:35.732217 systemd[2183]: Queued start job for default target default.target. Aug 5 22:12:35.744125 systemd[2183]: Created slice app.slice - User Application Slice. Aug 5 22:12:35.744180 systemd[2183]: Reached target paths.target - Paths. Aug 5 22:12:35.744200 systemd[2183]: Reached target timers.target - Timers. Aug 5 22:12:35.747126 systemd[2183]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:12:35.763001 systemd[2183]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:12:35.764016 systemd[2183]: Reached target sockets.target - Sockets. Aug 5 22:12:35.764040 systemd[2183]: Reached target basic.target - Basic System. Aug 5 22:12:35.764104 systemd[2183]: Reached target default.target - Main User Target. Aug 5 22:12:35.764142 systemd[2183]: Startup finished in 256ms. Aug 5 22:12:35.764325 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:12:35.771288 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:12:35.772929 systemd[1]: Startup finished in 887ms (kernel) + 10.936s (initrd) + 9.164s (userspace) = 20.987s. Aug 5 22:12:35.944704 systemd[1]: Started sshd@1-172.31.17.118:22-139.178.89.65:42266.service - OpenSSH per-connection server daemon (139.178.89.65:42266). 
Aug 5 22:12:35.999063 amazon-ssm-agent[2114]: 2024-08-05 22:12:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 5 22:12:36.095999 amazon-ssm-agent[2114]: 2024-08-05 22:12:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2207) started Aug 5 22:12:36.176059 sshd[2205]: Accepted publickey for core from 139.178.89.65 port 42266 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:36.178250 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:36.199161 amazon-ssm-agent[2114]: 2024-08-05 22:12:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 5 22:12:36.199315 systemd-logind[1946]: New session 2 of user core. Aug 5 22:12:36.203224 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:12:36.346342 sshd[2205]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:36.358290 systemd[1]: sshd@1-172.31.17.118:22-139.178.89.65:42266.service: Deactivated successfully. Aug 5 22:12:36.364772 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 22:12:36.375744 systemd-logind[1946]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:12:36.433752 systemd[1]: Started sshd@2-172.31.17.118:22-139.178.89.65:42274.service - OpenSSH per-connection server daemon (139.178.89.65:42274). Aug 5 22:12:36.441673 systemd-logind[1946]: Removed session 2. Aug 5 22:12:36.651775 sshd[2226]: Accepted publickey for core from 139.178.89.65 port 42274 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:36.657029 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:36.668135 systemd-logind[1946]: New session 3 of user core. Aug 5 22:12:36.674205 systemd[1]: Started session-3.scope - Session 3 of User core. 
Aug 5 22:12:36.825820 sshd[2226]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:36.862166 systemd[1]: sshd@2-172.31.17.118:22-139.178.89.65:42274.service: Deactivated successfully. Aug 5 22:12:36.877903 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:12:36.883045 systemd-logind[1946]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:12:36.893455 systemd[1]: Started sshd@3-172.31.17.118:22-139.178.89.65:42278.service - OpenSSH per-connection server daemon (139.178.89.65:42278). Aug 5 22:12:36.898496 systemd-logind[1946]: Removed session 3. Aug 5 22:12:37.113140 sshd[2235]: Accepted publickey for core from 139.178.89.65 port 42278 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:37.119037 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:37.175860 systemd-logind[1946]: New session 4 of user core. Aug 5 22:12:37.196864 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:12:37.297729 kubelet[2194]: E0805 22:12:37.297672 2194 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:37.300682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:37.301255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:37.302015 systemd[1]: kubelet.service: Consumed 1.125s CPU time. Aug 5 22:12:37.360488 sshd[2235]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:37.374477 systemd[1]: sshd@3-172.31.17.118:22-139.178.89.65:42278.service: Deactivated successfully. Aug 5 22:12:37.380187 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:12:37.381233 systemd-logind[1946]: Session 4 logged out. 
Waiting for processes to exit. Aug 5 22:12:37.394453 systemd[1]: Started sshd@4-172.31.17.118:22-139.178.89.65:42284.service - OpenSSH per-connection server daemon (139.178.89.65:42284). Aug 5 22:12:37.406594 systemd-logind[1946]: Removed session 4. Aug 5 22:12:37.608365 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 42284 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:37.615460 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:37.635177 systemd-logind[1946]: New session 5 of user core. Aug 5 22:12:37.646683 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:12:37.794951 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:12:37.795602 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:37.815040 sudo[2246]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:37.839827 sshd[2243]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:37.845459 systemd[1]: sshd@4-172.31.17.118:22-139.178.89.65:42284.service: Deactivated successfully. Aug 5 22:12:37.847862 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:12:37.849884 systemd-logind[1946]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:12:37.851307 systemd-logind[1946]: Removed session 5. Aug 5 22:12:37.881668 systemd[1]: Started sshd@5-172.31.17.118:22-139.178.89.65:42294.service - OpenSSH per-connection server daemon (139.178.89.65:42294). Aug 5 22:12:38.072022 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 42294 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:38.073851 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:38.087423 systemd-logind[1946]: New session 6 of user core. Aug 5 22:12:38.100444 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 5 22:12:38.229167 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:12:38.229582 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:38.241585 sudo[2255]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:38.247271 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:12:38.247645 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:38.262387 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:38.266287 auditctl[2258]: No rules Aug 5 22:12:38.266693 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:12:38.267017 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:38.270320 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:38.364438 augenrules[2276]: No rules Aug 5 22:12:38.368282 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:38.374820 sudo[2254]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:38.403742 sshd[2251]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:38.413717 systemd[1]: sshd@5-172.31.17.118:22-139.178.89.65:42294.service: Deactivated successfully. Aug 5 22:12:38.418140 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:12:38.426457 systemd-logind[1946]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:12:38.465506 systemd[1]: Started sshd@6-172.31.17.118:22-139.178.89.65:42296.service - OpenSSH per-connection server daemon (139.178.89.65:42296). Aug 5 22:12:38.467937 systemd-logind[1946]: Removed session 6. 
Aug 5 22:12:38.681483 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 42296 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:38.683456 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:38.711372 systemd-logind[1946]: New session 7 of user core. Aug 5 22:12:38.721327 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:12:38.854463 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:12:38.855281 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:40.532713 systemd-resolved[1765]: Clock change detected. Flushing caches. Aug 5 22:12:40.537059 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:12:40.547125 (dockerd)[2296]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:12:41.464338 dockerd[2296]: time="2024-08-05T22:12:41.464267371Z" level=info msg="Starting up" Aug 5 22:12:41.805663 dockerd[2296]: time="2024-08-05T22:12:41.805226335Z" level=info msg="Loading containers: start." Aug 5 22:12:42.238668 kernel: Initializing XFRM netlink socket Aug 5 22:12:42.343623 (udev-worker)[2311]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:12:42.543395 systemd-networkd[1807]: docker0: Link UP Aug 5 22:12:42.584319 dockerd[2296]: time="2024-08-05T22:12:42.584193182Z" level=info msg="Loading containers: done." 
Aug 5 22:12:42.866944 dockerd[2296]: time="2024-08-05T22:12:42.866889417Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:12:42.867168 dockerd[2296]: time="2024-08-05T22:12:42.867145024Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:12:42.867422 dockerd[2296]: time="2024-08-05T22:12:42.867391509Z" level=info msg="Daemon has completed initialization" Aug 5 22:12:42.925675 dockerd[2296]: time="2024-08-05T22:12:42.924996542Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:12:42.925098 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:12:44.256355 containerd[1970]: time="2024-08-05T22:12:44.256250716Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\"" Aug 5 22:12:45.089910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519129084.mount: Deactivated successfully. Aug 5 22:12:48.880936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:12:48.906017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 5 22:12:49.662187 containerd[1970]: time="2024-08-05T22:12:49.662129052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:49.719537 containerd[1970]: time="2024-08-05T22:12:49.719018729Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.3: active requests=0, bytes read=32773238" Aug 5 22:12:49.754534 containerd[1970]: time="2024-08-05T22:12:49.754456974Z" level=info msg="ImageCreate event name:\"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:49.824518 containerd[1970]: time="2024-08-05T22:12:49.824414067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:49.826020 containerd[1970]: time="2024-08-05T22:12:49.825968517Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.3\" with image id \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\", size \"32770038\" in 5.569679495s" Aug 5 22:12:49.826020 containerd[1970]: time="2024-08-05T22:12:49.826011653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\" returns image reference \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\"" Aug 5 22:12:49.948384 containerd[1970]: time="2024-08-05T22:12:49.948112780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\"" Aug 5 22:12:50.485931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:12:50.506573 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:12:50.593663 kubelet[2498]: E0805 22:12:50.592628 2498 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:12:50.598564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:12:50.598835 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:12:54.387068 containerd[1970]: time="2024-08-05T22:12:54.387012761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:54.388667 containerd[1970]: time="2024-08-05T22:12:54.388542765Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.3: active requests=0, bytes read=29589535"
Aug 5 22:12:54.391409 containerd[1970]: time="2024-08-05T22:12:54.390602187Z" level=info msg="ImageCreate event name:\"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:54.395342 containerd[1970]: time="2024-08-05T22:12:54.395290934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:54.397035 containerd[1970]: time="2024-08-05T22:12:54.396988268Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.3\" with image id \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\", size \"31139481\" in 4.448840416s"
Aug 5 22:12:54.397035 containerd[1970]: time="2024-08-05T22:12:54.397031054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\" returns image reference \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\""
Aug 5 22:12:54.442165 containerd[1970]: time="2024-08-05T22:12:54.442098387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\""
Aug 5 22:12:56.830053 containerd[1970]: time="2024-08-05T22:12:56.829992346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:56.859761 containerd[1970]: time="2024-08-05T22:12:56.859673221Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.3: active requests=0, bytes read=17779544"
Aug 5 22:12:56.931760 containerd[1970]: time="2024-08-05T22:12:56.931677603Z" level=info msg="ImageCreate event name:\"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:56.969597 containerd[1970]: time="2024-08-05T22:12:56.969435196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:56.970813 containerd[1970]: time="2024-08-05T22:12:56.970623903Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.3\" with image id \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\", size \"19329508\" in 2.528448278s"
Aug 5 22:12:56.970813 containerd[1970]: time="2024-08-05T22:12:56.970689693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\" returns image reference \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\""
Aug 5 22:12:57.022019 containerd[1970]: time="2024-08-05T22:12:57.021973472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\""
Aug 5 22:12:58.748506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093255377.mount: Deactivated successfully.
Aug 5 22:12:59.750765 containerd[1970]: time="2024-08-05T22:12:59.750710660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:59.753607 containerd[1970]: time="2024-08-05T22:12:59.753526358Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.3: active requests=0, bytes read=29036435"
Aug 5 22:12:59.756981 containerd[1970]: time="2024-08-05T22:12:59.756906322Z" level=info msg="ImageCreate event name:\"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:59.774484 containerd[1970]: time="2024-08-05T22:12:59.773371339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:59.774484 containerd[1970]: time="2024-08-05T22:12:59.774292736Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.3\" with image id \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\", repo tag \"registry.k8s.io/kube-proxy:v1.30.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\", size \"29035454\" in 2.752269061s"
Aug 5 22:12:59.774484 containerd[1970]: time="2024-08-05T22:12:59.774334497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\" returns image reference \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\""
Aug 5 22:12:59.818869 containerd[1970]: time="2024-08-05T22:12:59.818757724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 22:13:00.617195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106631534.mount: Deactivated successfully.
Aug 5 22:13:00.631422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:13:00.643945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:13:01.132134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:13:01.138309 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:13:01.408724 kubelet[2552]: E0805 22:13:01.408135 2552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:13:01.415890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:13:01.416083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:13:04.530688 containerd[1970]: time="2024-08-05T22:13:04.530621562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:04.532290 containerd[1970]: time="2024-08-05T22:13:04.532188311Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Aug 5 22:13:04.534745 containerd[1970]: time="2024-08-05T22:13:04.534687200Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:04.539425 containerd[1970]: time="2024-08-05T22:13:04.539371493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:04.540843 containerd[1970]: time="2024-08-05T22:13:04.540793007Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 4.721518558s"
Aug 5 22:13:04.540968 containerd[1970]: time="2024-08-05T22:13:04.540847315Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Aug 5 22:13:04.568395 containerd[1970]: time="2024-08-05T22:13:04.568357157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:13:04.646976 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 5 22:13:05.164006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332015914.mount: Deactivated successfully.
Aug 5 22:13:05.179814 containerd[1970]: time="2024-08-05T22:13:05.179746097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:05.181310 containerd[1970]: time="2024-08-05T22:13:05.181245757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug 5 22:13:05.187754 containerd[1970]: time="2024-08-05T22:13:05.187691271Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:05.199432 containerd[1970]: time="2024-08-05T22:13:05.199349725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:05.206731 containerd[1970]: time="2024-08-05T22:13:05.203816465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 635.413203ms"
Aug 5 22:13:05.206731 containerd[1970]: time="2024-08-05T22:13:05.203874319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:13:05.296848 containerd[1970]: time="2024-08-05T22:13:05.296800297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Aug 5 22:13:06.040051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899307366.mount: Deactivated successfully.
Aug 5 22:13:11.189486 containerd[1970]: time="2024-08-05T22:13:11.189417385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:11.210088 containerd[1970]: time="2024-08-05T22:13:11.209925459Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Aug 5 22:13:11.226916 containerd[1970]: time="2024-08-05T22:13:11.226857915Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:11.238844 containerd[1970]: time="2024-08-05T22:13:11.238745767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:11.246062 containerd[1970]: time="2024-08-05T22:13:11.239843826Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.94300246s"
Aug 5 22:13:11.246062 containerd[1970]: time="2024-08-05T22:13:11.239896305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Aug 5 22:13:11.499799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 5 22:13:11.522961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:13:13.274517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:13:13.296154 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:13:13.460830 kubelet[2728]: E0805 22:13:13.460611 2728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:13:13.466015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:13:13.466217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:13:17.344097 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:13:17.364226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:13:17.427970 systemd[1]: Reloading requested from client PID 2742 ('systemctl') (unit session-7.scope)...
Aug 5 22:13:17.427993 systemd[1]: Reloading...
Aug 5 22:13:17.624719 zram_generator::config[2784]: No configuration found.
Aug 5 22:13:17.851207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:13:18.090272 systemd[1]: Reloading finished in 661 ms.
Aug 5 22:13:18.216996 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:13:18.217506 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:13:18.217829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:13:18.226379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:13:19.036842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:13:19.066790 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:13:19.126509 update_engine[1947]: I0805 22:13:19.126452 1947 update_attempter.cc:509] Updating boot flags...
Aug 5 22:13:19.219722 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:13:19.219722 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:13:19.219722 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:13:19.223752 kubelet[2838]: I0805 22:13:19.221562 2838 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:13:19.297679 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2859)
Aug 5 22:13:19.696867 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2849)
Aug 5 22:13:20.055757 kubelet[2838]: I0805 22:13:20.055304 2838 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Aug 5 22:13:20.055757 kubelet[2838]: I0805 22:13:20.055345 2838 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:13:20.055757 kubelet[2838]: I0805 22:13:20.055673 2838 server.go:927] "Client rotation is on, will bootstrap in background"
Aug 5 22:13:20.087153 kubelet[2838]: I0805 22:13:20.087108 2838 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:13:20.092828 kubelet[2838]: E0805 22:13:20.092793 2838 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.163252 kubelet[2838]: I0805 22:13:20.162592 2838 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:13:20.163252 kubelet[2838]: I0805 22:13:20.162869 2838 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:13:20.163252 kubelet[2838]: I0805 22:13:20.162905 2838 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-118","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:13:20.163252 kubelet[2838]: I0805 22:13:20.163089 2838 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:13:20.163505 kubelet[2838]: I0805 22:13:20.163106 2838 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:13:20.164536 kubelet[2838]: I0805 22:13:20.164507 2838 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:13:20.170521 kubelet[2838]: W0805 22:13:20.168146 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-118&limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.170521 kubelet[2838]: E0805 22:13:20.168553 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-118&limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.170521 kubelet[2838]: I0805 22:13:20.169917 2838 kubelet.go:400] "Attempting to sync node with API server"
Aug 5 22:13:20.170521 kubelet[2838]: I0805 22:13:20.169943 2838 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:13:20.170521 kubelet[2838]: I0805 22:13:20.170153 2838 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:13:20.170521 kubelet[2838]: I0805 22:13:20.170181 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:13:20.180763 kubelet[2838]: W0805 22:13:20.180670 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.181055 kubelet[2838]: E0805 22:13:20.180776 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.181556 kubelet[2838]: I0805 22:13:20.181476 2838 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:13:20.185238 kubelet[2838]: I0805 22:13:20.185201 2838 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:13:20.185686 kubelet[2838]: W0805 22:13:20.185321 2838 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:13:20.186242 kubelet[2838]: I0805 22:13:20.186211 2838 server.go:1264] "Started kubelet"
Aug 5 22:13:20.192981 kubelet[2838]: I0805 22:13:20.189592 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:13:20.193365 kubelet[2838]: I0805 22:13:20.193341 2838 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:13:20.195974 kubelet[2838]: I0805 22:13:20.195936 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:13:20.208791 kubelet[2838]: I0805 22:13:20.205739 2838 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:13:20.209317 kubelet[2838]: E0805 22:13:20.208964 2838 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.118:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.118:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-118.17e8f4c8ac0a00a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-118,UID:ip-172-31-17-118,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-118,},FirstTimestamp:2024-08-05 22:13:20.186187945 +0000 UTC m=+1.108574509,LastTimestamp:2024-08-05 22:13:20.186187945 +0000 UTC m=+1.108574509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-118,}"
Aug 5 22:13:20.213086 kubelet[2838]: I0805 22:13:20.213027 2838 server.go:455] "Adding debug handlers to kubelet server"
Aug 5 22:13:20.216518 kubelet[2838]: I0805 22:13:20.216484 2838 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:13:20.218948 kubelet[2838]: I0805 22:13:20.218923 2838 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Aug 5 22:13:20.219372 kubelet[2838]: I0805 22:13:20.219357 2838 reconciler.go:26] "Reconciler: start to sync state"
Aug 5 22:13:20.221367 kubelet[2838]: W0805 22:13:20.221314 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.222243 kubelet[2838]: E0805 22:13:20.221373 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.222243 kubelet[2838]: I0805 22:13:20.221901 2838 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:13:20.222243 kubelet[2838]: I0805 22:13:20.222197 2838 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:13:20.226832 kubelet[2838]: E0805 22:13:20.226756 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-118?timeout=10s\": dial tcp 172.31.17.118:6443: connect: connection refused" interval="200ms"
Aug 5 22:13:20.230201 kubelet[2838]: E0805 22:13:20.230078 2838 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:13:20.230919 kubelet[2838]: I0805 22:13:20.230778 2838 factory.go:221] Registration of the containerd container factory successfully
Aug 5 22:13:20.272215 kubelet[2838]: I0805 22:13:20.272129 2838 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:13:20.274218 kubelet[2838]: I0805 22:13:20.272397 2838 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:13:20.274535 kubelet[2838]: I0805 22:13:20.274359 2838 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:13:20.285081 kubelet[2838]: I0805 22:13:20.285054 2838 policy_none.go:49] "None policy: Start"
Aug 5 22:13:20.288700 kubelet[2838]: I0805 22:13:20.287901 2838 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 22:13:20.288700 kubelet[2838]: I0805 22:13:20.287931 2838 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:13:20.301899 kubelet[2838]: I0805 22:13:20.301360 2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:13:20.304871 kubelet[2838]: I0805 22:13:20.304845 2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:13:20.305853 kubelet[2838]: I0805 22:13:20.305759 2838 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:13:20.309680 kubelet[2838]: I0805 22:13:20.305982 2838 kubelet.go:2337] "Starting kubelet main sync loop"
Aug 5 22:13:20.312013 kubelet[2838]: E0805 22:13:20.311223 2838 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:13:20.314393 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 22:13:20.324726 kubelet[2838]: W0805 22:13:20.315729 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.324726 kubelet[2838]: E0805 22:13:20.315781 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused
Aug 5 22:13:20.348914 kubelet[2838]: I0805 22:13:20.348881 2838 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-118"
Aug 5 22:13:20.349454 kubelet[2838]: E0805 22:13:20.349425 2838 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.118:6443/api/v1/nodes\": dial tcp 172.31.17.118:6443: connect: connection refused" node="ip-172-31-17-118"
Aug 5 22:13:20.358687 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 22:13:20.374744 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 22:13:20.376424 kubelet[2838]: I0805 22:13:20.376402 2838 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:13:20.377168 kubelet[2838]: I0805 22:13:20.377109 2838 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 5 22:13:20.377966 kubelet[2838]: I0805 22:13:20.377262 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:13:20.379607 kubelet[2838]: E0805 22:13:20.379557 2838 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-118\" not found"
Aug 5 22:13:20.411661 kubelet[2838]: I0805 22:13:20.411588 2838 topology_manager.go:215] "Topology Admit Handler" podUID="0a1cbae1c5c93e3cb5f78c9fe9be9ad2" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:20.414262 kubelet[2838]: I0805 22:13:20.414058 2838 topology_manager.go:215] "Topology Admit Handler" podUID="1ca4eef2d6a5fc89285b5948f08ccea5" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:20.416890 kubelet[2838]: I0805 22:13:20.416858 2838 topology_manager.go:215] "Topology Admit Handler" podUID="d30effa94f1109ddbb45925e31185d93" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-118"
Aug 5 22:13:20.428005 systemd[1]: Created slice kubepods-burstable-pod0a1cbae1c5c93e3cb5f78c9fe9be9ad2.slice - libcontainer container kubepods-burstable-pod0a1cbae1c5c93e3cb5f78c9fe9be9ad2.slice.
Aug 5 22:13:20.428811 kubelet[2838]: E0805 22:13:20.428582 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-118?timeout=10s\": dial tcp 172.31.17.118:6443: connect: connection refused" interval="400ms"
Aug 5 22:13:20.435888 kubelet[2838]: I0805 22:13:20.435838 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:20.435888 kubelet[2838]: I0805 22:13:20.435890 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a1cbae1c5c93e3cb5f78c9fe9be9ad2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-118\" (UID: \"0a1cbae1c5c93e3cb5f78c9fe9be9ad2\") " pod="kube-system/kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:20.436242 kubelet[2838]: I0805 22:13:20.435918 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:20.436242 kubelet[2838]: I0805 22:13:20.436095 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:20.436242 kubelet[2838]: I0805 22:13:20.436131 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d30effa94f1109ddbb45925e31185d93-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-118\" (UID: \"d30effa94f1109ddbb45925e31185d93\") " pod="kube-system/kube-scheduler-ip-172-31-17-118"
Aug 5 22:13:20.436242 kubelet[2838]: I0805 22:13:20.436155 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a1cbae1c5c93e3cb5f78c9fe9be9ad2-ca-certs\") pod \"kube-apiserver-ip-172-31-17-118\" (UID: \"0a1cbae1c5c93e3cb5f78c9fe9be9ad2\") " pod="kube-system/kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:20.436242 kubelet[2838]: I0805 22:13:20.436175 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a1cbae1c5c93e3cb5f78c9fe9be9ad2-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-118\" (UID: \"0a1cbae1c5c93e3cb5f78c9fe9be9ad2\") " pod="kube-system/kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:20.436416 kubelet[2838]: I0805 22:13:20.436197 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:20.436416 kubelet[2838]: I0805 22:13:20.436223 2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:20.450266 systemd[1]: Created slice kubepods-burstable-pod1ca4eef2d6a5fc89285b5948f08ccea5.slice - libcontainer container kubepods-burstable-pod1ca4eef2d6a5fc89285b5948f08ccea5.slice.
Aug 5 22:13:20.461353 systemd[1]: Created slice kubepods-burstable-podd30effa94f1109ddbb45925e31185d93.slice - libcontainer container kubepods-burstable-podd30effa94f1109ddbb45925e31185d93.slice.
Aug 5 22:13:20.552750 kubelet[2838]: I0805 22:13:20.552711 2838 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-118"
Aug 5 22:13:20.553417 kubelet[2838]: E0805 22:13:20.553375 2838 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.118:6443/api/v1/nodes\": dial tcp 172.31.17.118:6443: connect: connection refused" node="ip-172-31-17-118"
Aug 5 22:13:20.749572 containerd[1970]: time="2024-08-05T22:13:20.749516313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-118,Uid:0a1cbae1c5c93e3cb5f78c9fe9be9ad2,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:20.765004 containerd[1970]: time="2024-08-05T22:13:20.764943708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-118,Uid:1ca4eef2d6a5fc89285b5948f08ccea5,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:20.770758 containerd[1970]: time="2024-08-05T22:13:20.770447060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-118,Uid:d30effa94f1109ddbb45925e31185d93,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:20.829771 kubelet[2838]: E0805 22:13:20.829714 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-118?timeout=10s\": dial tcp 172.31.17.118:6443: connect: connection refused" interval="800ms"
Aug 5 22:13:20.959846 kubelet[2838]: I0805 22:13:20.956353 2838 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-118"
Aug 5 22:13:20.959846 kubelet[2838]: E0805 22:13:20.957135 2838 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.118:6443/api/v1/nodes\": dial tcp 172.31.17.118:6443: connect: connection refused" node="ip-172-31-17-118"
Aug 5 22:13:21.338012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974533062.mount: Deactivated successfully.
Aug 5 22:13:21.352181 containerd[1970]: time="2024-08-05T22:13:21.352123792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:13:21.353751 containerd[1970]: time="2024-08-05T22:13:21.353613220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 5 22:13:21.355183 containerd[1970]: time="2024-08-05T22:13:21.355144727Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:13:21.356633 containerd[1970]: time="2024-08-05T22:13:21.356594381Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:13:21.358210 containerd[1970]: time="2024-08-05T22:13:21.358068676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:13:21.360444 containerd[1970]: time="2024-08-05T22:13:21.360391203Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5
22:13:21.361841 containerd[1970]: time="2024-08-05T22:13:21.361724746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:13:21.365706 containerd[1970]: time="2024-08-05T22:13:21.365658095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:21.367691 containerd[1970]: time="2024-08-05T22:13:21.366758938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.912276ms" Aug 5 22:13:21.379622 containerd[1970]: time="2024-08-05T22:13:21.378283991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.201242ms" Aug 5 22:13:21.379875 containerd[1970]: time="2024-08-05T22:13:21.379841341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.195264ms" Aug 5 22:13:21.543846 kubelet[2838]: W0805 22:13:21.543776 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: 
connect: connection refused Aug 5 22:13:21.543846 kubelet[2838]: E0805 22:13:21.543849 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused Aug 5 22:13:21.548415 kubelet[2838]: W0805 22:13:21.548353 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-118&limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused Aug 5 22:13:21.548415 kubelet[2838]: E0805 22:13:21.548418 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-118&limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused Aug 5 22:13:21.632363 kubelet[2838]: E0805 22:13:21.632178 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-118?timeout=10s\": dial tcp 172.31.17.118:6443: connect: connection refused" interval="1.6s" Aug 5 22:13:21.705836 kubelet[2838]: W0805 22:13:21.705746 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused Aug 5 22:13:21.705836 kubelet[2838]: E0805 22:13:21.705802 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: 
connect: connection refused Aug 5 22:13:21.789579 kubelet[2838]: I0805 22:13:21.787434 2838 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-118" Aug 5 22:13:21.794754 kubelet[2838]: W0805 22:13:21.793897 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused Aug 5 22:13:21.794754 kubelet[2838]: E0805 22:13:21.793982 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.118:6443: connect: connection refused Aug 5 22:13:21.794957 kubelet[2838]: E0805 22:13:21.794866 2838 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.118:6443/api/v1/nodes\": dial tcp 172.31.17.118:6443: connect: connection refused" node="ip-172-31-17-118" Aug 5 22:13:21.836124 containerd[1970]: time="2024-08-05T22:13:21.836009316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:21.837026 containerd[1970]: time="2024-08-05T22:13:21.836900779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:21.837150 containerd[1970]: time="2024-08-05T22:13:21.837116663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:21.837281 containerd[1970]: time="2024-08-05T22:13:21.837186320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:21.838576 containerd[1970]: time="2024-08-05T22:13:21.838331838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:21.838576 containerd[1970]: time="2024-08-05T22:13:21.836089399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:21.838576 containerd[1970]: time="2024-08-05T22:13:21.838281304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:21.838576 containerd[1970]: time="2024-08-05T22:13:21.838303336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:21.843479 containerd[1970]: time="2024-08-05T22:13:21.843350047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:21.845778 containerd[1970]: time="2024-08-05T22:13:21.843840284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:21.845778 containerd[1970]: time="2024-08-05T22:13:21.843926433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:21.845778 containerd[1970]: time="2024-08-05T22:13:21.844030180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:21.894941 systemd[1]: Started cri-containerd-b7043087a7b8ece33fd7dfa3514b5e13dd62e43bf4bd5c9d9cba6e8af32b6c16.scope - libcontainer container b7043087a7b8ece33fd7dfa3514b5e13dd62e43bf4bd5c9d9cba6e8af32b6c16. 
Aug 5 22:13:21.906994 systemd[1]: Started cri-containerd-660e44892a242a1f6955da6746d7a3de1d680203504b9d80393397d16b4c5078.scope - libcontainer container 660e44892a242a1f6955da6746d7a3de1d680203504b9d80393397d16b4c5078. Aug 5 22:13:21.910301 systemd[1]: Started cri-containerd-81f85a00770beaf9a105a041e79ce7f1b35b4c820cca71899e245e78b026bc95.scope - libcontainer container 81f85a00770beaf9a105a041e79ce7f1b35b4c820cca71899e245e78b026bc95. Aug 5 22:13:21.999902 containerd[1970]: time="2024-08-05T22:13:21.999745393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-118,Uid:0a1cbae1c5c93e3cb5f78c9fe9be9ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7043087a7b8ece33fd7dfa3514b5e13dd62e43bf4bd5c9d9cba6e8af32b6c16\"" Aug 5 22:13:22.007603 containerd[1970]: time="2024-08-05T22:13:22.007546012Z" level=info msg="CreateContainer within sandbox \"b7043087a7b8ece33fd7dfa3514b5e13dd62e43bf4bd5c9d9cba6e8af32b6c16\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:13:22.058583 containerd[1970]: time="2024-08-05T22:13:22.058131292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-118,Uid:1ca4eef2d6a5fc89285b5948f08ccea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"660e44892a242a1f6955da6746d7a3de1d680203504b9d80393397d16b4c5078\"" Aug 5 22:13:22.068491 containerd[1970]: time="2024-08-05T22:13:22.068445846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-118,Uid:d30effa94f1109ddbb45925e31185d93,Namespace:kube-system,Attempt:0,} returns sandbox id \"81f85a00770beaf9a105a041e79ce7f1b35b4c820cca71899e245e78b026bc95\"" Aug 5 22:13:22.084943 containerd[1970]: time="2024-08-05T22:13:22.084660974Z" level=info msg="CreateContainer within sandbox \"81f85a00770beaf9a105a041e79ce7f1b35b4c820cca71899e245e78b026bc95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:13:22.084943 containerd[1970]: 
time="2024-08-05T22:13:22.084838079Z" level=info msg="CreateContainer within sandbox \"660e44892a242a1f6955da6746d7a3de1d680203504b9d80393397d16b4c5078\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:13:22.091037 containerd[1970]: time="2024-08-05T22:13:22.090977940Z" level=info msg="CreateContainer within sandbox \"b7043087a7b8ece33fd7dfa3514b5e13dd62e43bf4bd5c9d9cba6e8af32b6c16\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a41a5fbbae751d8167cdfdc150fbb7874528f566337b052bbeb5fe6c59118d2d\"" Aug 5 22:13:22.092274 containerd[1970]: time="2024-08-05T22:13:22.091855223Z" level=info msg="StartContainer for \"a41a5fbbae751d8167cdfdc150fbb7874528f566337b052bbeb5fe6c59118d2d\"" Aug 5 22:13:22.106370 kubelet[2838]: E0805 22:13:22.106190 2838 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.118:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.118:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-118.17e8f4c8ac0a00a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-118,UID:ip-172-31-17-118,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-118,},FirstTimestamp:2024-08-05 22:13:20.186187945 +0000 UTC m=+1.108574509,LastTimestamp:2024-08-05 22:13:20.186187945 +0000 UTC m=+1.108574509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-118,}" Aug 5 22:13:22.115066 containerd[1970]: time="2024-08-05T22:13:22.115006661Z" level=info msg="CreateContainer within sandbox \"660e44892a242a1f6955da6746d7a3de1d680203504b9d80393397d16b4c5078\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b\"" Aug 5 22:13:22.115799 containerd[1970]: time="2024-08-05T22:13:22.115764450Z" level=info msg="StartContainer for \"3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b\"" Aug 5 22:13:22.123409 containerd[1970]: time="2024-08-05T22:13:22.123221916Z" level=info msg="CreateContainer within sandbox \"81f85a00770beaf9a105a041e79ce7f1b35b4c820cca71899e245e78b026bc95\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5\"" Aug 5 22:13:22.124329 containerd[1970]: time="2024-08-05T22:13:22.124295464Z" level=info msg="StartContainer for \"dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5\"" Aug 5 22:13:22.148478 kubelet[2838]: E0805 22:13:22.145750 2838 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.118:6443: connect: connection refused Aug 5 22:13:22.156901 systemd[1]: Started cri-containerd-a41a5fbbae751d8167cdfdc150fbb7874528f566337b052bbeb5fe6c59118d2d.scope - libcontainer container a41a5fbbae751d8167cdfdc150fbb7874528f566337b052bbeb5fe6c59118d2d. Aug 5 22:13:22.212695 systemd[1]: Started cri-containerd-3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b.scope - libcontainer container 3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b. Aug 5 22:13:22.234999 systemd[1]: Started cri-containerd-dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5.scope - libcontainer container dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5. 
Aug 5 22:13:22.321813 containerd[1970]: time="2024-08-05T22:13:22.321766767Z" level=info msg="StartContainer for \"a41a5fbbae751d8167cdfdc150fbb7874528f566337b052bbeb5fe6c59118d2d\" returns successfully" Aug 5 22:13:22.412894 containerd[1970]: time="2024-08-05T22:13:22.411838874Z" level=info msg="StartContainer for \"3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b\" returns successfully" Aug 5 22:13:22.478985 containerd[1970]: time="2024-08-05T22:13:22.476391709Z" level=info msg="StartContainer for \"dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5\" returns successfully" Aug 5 22:13:23.233472 kubelet[2838]: E0805 22:13:23.233416 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-118?timeout=10s\": dial tcp 172.31.17.118:6443: connect: connection refused" interval="3.2s" Aug 5 22:13:23.403658 kubelet[2838]: I0805 22:13:23.403455 2838 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-118" Aug 5 22:13:26.167416 kubelet[2838]: I0805 22:13:26.167078 2838 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-118" Aug 5 22:13:26.179881 kubelet[2838]: I0805 22:13:26.179635 2838 apiserver.go:52] "Watching apiserver" Aug 5 22:13:26.219618 kubelet[2838]: I0805 22:13:26.219587 2838 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Aug 5 22:13:28.527767 systemd[1]: Reloading requested from client PID 3289 ('systemctl') (unit session-7.scope)... Aug 5 22:13:28.527788 systemd[1]: Reloading... Aug 5 22:13:28.682671 zram_generator::config[3333]: No configuration found. Aug 5 22:13:28.858847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 5 22:13:29.049668 systemd[1]: Reloading finished in 521 ms. Aug 5 22:13:29.131340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:29.132424 kubelet[2838]: E0805 22:13:29.132038 2838 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-17-118.17e8f4c8ac0a00a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-118,UID:ip-172-31-17-118,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-118,},FirstTimestamp:2024-08-05 22:13:20.186187945 +0000 UTC m=+1.108574509,LastTimestamp:2024-08-05 22:13:20.186187945 +0000 UTC m=+1.108574509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-118,}" Aug 5 22:13:29.157262 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:13:29.157735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:29.169814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:29.973933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:29.982179 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:13:30.105068 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:13:30.105068 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 5 22:13:30.105068 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:13:30.111372 kubelet[3384]: I0805 22:13:30.110312 3384 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:13:30.137237 kubelet[3384]: I0805 22:13:30.137189 3384 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Aug 5 22:13:30.137405 kubelet[3384]: I0805 22:13:30.137393 3384 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:13:30.137825 kubelet[3384]: I0805 22:13:30.137782 3384 server.go:927] "Client rotation is on, will bootstrap in background" Aug 5 22:13:30.151687 kubelet[3384]: I0805 22:13:30.151592 3384 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:13:30.162775 kubelet[3384]: I0805 22:13:30.162724 3384 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:13:30.175516 kubelet[3384]: I0805 22:13:30.174372 3384 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:13:30.175516 kubelet[3384]: I0805 22:13:30.174819 3384 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:13:30.175516 kubelet[3384]: I0805 22:13:30.174855 3384 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-118","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:13:30.176109 kubelet[3384]: I0805 22:13:30.176083 3384 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 
22:13:30.176109 kubelet[3384]: I0805 22:13:30.176114 3384 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:13:30.176279 kubelet[3384]: I0805 22:13:30.176180 3384 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:13:30.176393 kubelet[3384]: I0805 22:13:30.176373 3384 kubelet.go:400] "Attempting to sync node with API server" Aug 5 22:13:30.176452 kubelet[3384]: I0805 22:13:30.176394 3384 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:13:30.184361 kubelet[3384]: I0805 22:13:30.177168 3384 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:13:30.184361 kubelet[3384]: I0805 22:13:30.177199 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:13:30.184361 kubelet[3384]: I0805 22:13:30.178458 3384 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:13:30.192863 kubelet[3384]: I0805 22:13:30.191185 3384 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:13:30.192863 kubelet[3384]: I0805 22:13:30.192622 3384 server.go:1264] "Started kubelet" Aug 5 22:13:30.212148 kubelet[3384]: I0805 22:13:30.211164 3384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:13:30.218959 kubelet[3384]: I0805 22:13:30.218908 3384 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:13:30.223144 kubelet[3384]: I0805 22:13:30.223064 3384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:13:30.233593 kubelet[3384]: I0805 22:13:30.233421 3384 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:13:30.238042 kubelet[3384]: I0805 22:13:30.236196 3384 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Aug 5 22:13:30.239712 kubelet[3384]: I0805 22:13:30.238512 3384 reconciler.go:26] "Reconciler: start to sync state" Aug 
5 22:13:30.240954 kubelet[3384]: I0805 22:13:30.240749 3384 server.go:455] "Adding debug handlers to kubelet server" Aug 5 22:13:30.245658 kubelet[3384]: I0805 22:13:30.245622 3384 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:13:30.270348 kubelet[3384]: I0805 22:13:30.270318 3384 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:13:30.270494 kubelet[3384]: I0805 22:13:30.270426 3384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:13:30.281513 kubelet[3384]: I0805 22:13:30.281454 3384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:13:30.289039 kubelet[3384]: I0805 22:13:30.288825 3384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:13:30.289039 kubelet[3384]: I0805 22:13:30.288865 3384 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:13:30.289039 kubelet[3384]: I0805 22:13:30.288892 3384 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:13:30.289039 kubelet[3384]: E0805 22:13:30.288947 3384 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:13:30.337951 kubelet[3384]: E0805 22:13:30.337837 3384 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:13:30.338365 kubelet[3384]: I0805 22:13:30.338345 3384 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:13:30.355425 kubelet[3384]: E0805 22:13:30.355384 3384 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Aug 5 22:13:30.362495 kubelet[3384]: I0805 22:13:30.362341 3384 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-118" Aug 5 22:13:30.386166 kubelet[3384]: I0805 22:13:30.385405 3384 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-17-118" Aug 5 22:13:30.386166 kubelet[3384]: I0805 22:13:30.385489 3384 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-118" Aug 5 22:13:30.392277 kubelet[3384]: E0805 22:13:30.390941 3384 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:13:30.485751 kubelet[3384]: I0805 22:13:30.484693 3384 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:13:30.485751 kubelet[3384]: I0805 22:13:30.484714 3384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:13:30.485751 kubelet[3384]: I0805 22:13:30.484743 3384 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:13:30.485751 kubelet[3384]: I0805 22:13:30.484938 3384 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:13:30.485751 kubelet[3384]: I0805 22:13:30.484950 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:13:30.485751 kubelet[3384]: I0805 22:13:30.484976 3384 policy_none.go:49] "None policy: Start" Aug 5 22:13:30.487582 kubelet[3384]: I0805 22:13:30.487281 3384 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:13:30.487870 kubelet[3384]: I0805 22:13:30.487695 3384 state_mem.go:35] "Initializing new in-memory state store" 
Aug 5 22:13:30.488237 kubelet[3384]: I0805 22:13:30.488191 3384 state_mem.go:75] "Updated machine memory state"
Aug 5 22:13:30.496489 kubelet[3384]: I0805 22:13:30.495623 3384 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:13:30.496489 kubelet[3384]: I0805 22:13:30.495816 3384 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 5 22:13:30.496489 kubelet[3384]: I0805 22:13:30.496376 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:13:30.593464 kubelet[3384]: I0805 22:13:30.593408 3384 topology_manager.go:215] "Topology Admit Handler" podUID="0a1cbae1c5c93e3cb5f78c9fe9be9ad2" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:30.593819 kubelet[3384]: I0805 22:13:30.593800 3384 topology_manager.go:215] "Topology Admit Handler" podUID="1ca4eef2d6a5fc89285b5948f08ccea5" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:30.594903 kubelet[3384]: I0805 22:13:30.594849 3384 topology_manager.go:215] "Topology Admit Handler" podUID="d30effa94f1109ddbb45925e31185d93" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-118"
Aug 5 22:13:30.608084 kubelet[3384]: E0805 22:13:30.607938 3384 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-118\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:30.647017 kubelet[3384]: I0805 22:13:30.646832 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a1cbae1c5c93e3cb5f78c9fe9be9ad2-ca-certs\") pod \"kube-apiserver-ip-172-31-17-118\" (UID: \"0a1cbae1c5c93e3cb5f78c9fe9be9ad2\") " pod="kube-system/kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:30.647017 kubelet[3384]: I0805 22:13:30.646884 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a1cbae1c5c93e3cb5f78c9fe9be9ad2-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-118\" (UID: \"0a1cbae1c5c93e3cb5f78c9fe9be9ad2\") " pod="kube-system/kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:30.647017 kubelet[3384]: I0805 22:13:30.646917 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a1cbae1c5c93e3cb5f78c9fe9be9ad2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-118\" (UID: \"0a1cbae1c5c93e3cb5f78c9fe9be9ad2\") " pod="kube-system/kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:30.747793 kubelet[3384]: I0805 22:13:30.747597 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:30.753039 kubelet[3384]: I0805 22:13:30.749281 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:30.753039 kubelet[3384]: I0805 22:13:30.749469 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:30.753039 kubelet[3384]: I0805 22:13:30.751241 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:30.753039 kubelet[3384]: I0805 22:13:30.752393 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ca4eef2d6a5fc89285b5948f08ccea5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-118\" (UID: \"1ca4eef2d6a5fc89285b5948f08ccea5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-118"
Aug 5 22:13:30.753039 kubelet[3384]: I0805 22:13:30.752507 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d30effa94f1109ddbb45925e31185d93-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-118\" (UID: \"d30effa94f1109ddbb45925e31185d93\") " pod="kube-system/kube-scheduler-ip-172-31-17-118"
Aug 5 22:13:31.206755 kubelet[3384]: I0805 22:13:31.206303 3384 apiserver.go:52] "Watching apiserver"
Aug 5 22:13:31.242876 kubelet[3384]: I0805 22:13:31.240072 3384 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Aug 5 22:13:31.315687 kubelet[3384]: I0805 22:13:31.313776 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-118" podStartSLOduration=1.313521555 podStartE2EDuration="1.313521555s" podCreationTimestamp="2024-08-05 22:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:31.310831139 +0000 UTC m=+1.316158526" watchObservedRunningTime="2024-08-05 22:13:31.313521555 +0000 UTC m=+1.318848937"
Aug 5 22:13:31.355057 kubelet[3384]: I0805 22:13:31.354723 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-118" podStartSLOduration=1.354704907 podStartE2EDuration="1.354704907s" podCreationTimestamp="2024-08-05 22:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:31.32702698 +0000 UTC m=+1.332354367" watchObservedRunningTime="2024-08-05 22:13:31.354704907 +0000 UTC m=+1.360032293"
Aug 5 22:13:31.377254 kubelet[3384]: I0805 22:13:31.377170 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-118" podStartSLOduration=4.377155142 podStartE2EDuration="4.377155142s" podCreationTimestamp="2024-08-05 22:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:31.358005994 +0000 UTC m=+1.363333374" watchObservedRunningTime="2024-08-05 22:13:31.377155142 +0000 UTC m=+1.382482531"
Aug 5 22:13:31.424742 kubelet[3384]: E0805 22:13:31.424636 3384 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-118\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-118"
Aug 5 22:13:37.410788 sudo[2287]: pam_unix(sudo:session): session closed for user root
Aug 5 22:13:37.434949 sshd[2284]: pam_unix(sshd:session): session closed for user core
Aug 5 22:13:37.451651 systemd[1]: sshd@6-172.31.17.118:22-139.178.89.65:42296.service: Deactivated successfully.
Aug 5 22:13:37.476279 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 22:13:37.476595 systemd[1]: session-7.scope: Consumed 5.445s CPU time, 136.4M memory peak, 0B memory swap peak.
Aug 5 22:13:37.485081 systemd-logind[1946]: Session 7 logged out. Waiting for processes to exit.
Aug 5 22:13:37.486970 systemd-logind[1946]: Removed session 7.
Aug 5 22:13:42.971725 kubelet[3384]: I0805 22:13:42.971406 3384 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 22:13:42.974503 containerd[1970]: time="2024-08-05T22:13:42.974444526Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 22:13:42.976424 kubelet[3384]: I0805 22:13:42.974797 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 22:13:43.261484 kubelet[3384]: I0805 22:13:43.261317 3384 topology_manager.go:215] "Topology Admit Handler" podUID="1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3" podNamespace="kube-system" podName="kube-proxy-m8stf"
Aug 5 22:13:43.277486 systemd[1]: Created slice kubepods-besteffort-pod1e9e7c3c_9cc9_46bc_8134_8b9c7cbbd2f3.slice - libcontainer container kubepods-besteffort-pod1e9e7c3c_9cc9_46bc_8134_8b9c7cbbd2f3.slice.
Aug 5 22:13:43.360086 kubelet[3384]: I0805 22:13:43.360039 3384 topology_manager.go:215] "Topology Admit Handler" podUID="7e9cf25f-ea35-4a0e-9463-fcdf9794b808" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-ngbfg"
Aug 5 22:13:43.370903 systemd[1]: Created slice kubepods-besteffort-pod7e9cf25f_ea35_4a0e_9463_fcdf9794b808.slice - libcontainer container kubepods-besteffort-pod7e9cf25f_ea35_4a0e_9463_fcdf9794b808.slice.
Aug 5 22:13:43.375924 kubelet[3384]: I0805 22:13:43.375886 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3-kube-proxy\") pod \"kube-proxy-m8stf\" (UID: \"1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3\") " pod="kube-system/kube-proxy-m8stf"
Aug 5 22:13:43.376066 kubelet[3384]: I0805 22:13:43.375931 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj755\" (UniqueName: \"kubernetes.io/projected/1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3-kube-api-access-kj755\") pod \"kube-proxy-m8stf\" (UID: \"1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3\") " pod="kube-system/kube-proxy-m8stf"
Aug 5 22:13:43.376066 kubelet[3384]: I0805 22:13:43.375960 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3-xtables-lock\") pod \"kube-proxy-m8stf\" (UID: \"1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3\") " pod="kube-system/kube-proxy-m8stf"
Aug 5 22:13:43.376066 kubelet[3384]: I0805 22:13:43.375984 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3-lib-modules\") pod \"kube-proxy-m8stf\" (UID: \"1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3\") " pod="kube-system/kube-proxy-m8stf"
Aug 5 22:13:43.476938 kubelet[3384]: I0805 22:13:43.476584 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e9cf25f-ea35-4a0e-9463-fcdf9794b808-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-ngbfg\" (UID: \"7e9cf25f-ea35-4a0e-9463-fcdf9794b808\") " pod="tigera-operator/tigera-operator-76ff79f7fd-ngbfg"
Aug 5 22:13:43.476938 kubelet[3384]: I0805 22:13:43.476681 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7qc\" (UniqueName: \"kubernetes.io/projected/7e9cf25f-ea35-4a0e-9463-fcdf9794b808-kube-api-access-th7qc\") pod \"tigera-operator-76ff79f7fd-ngbfg\" (UID: \"7e9cf25f-ea35-4a0e-9463-fcdf9794b808\") " pod="tigera-operator/tigera-operator-76ff79f7fd-ngbfg"
Aug 5 22:13:43.593797 containerd[1970]: time="2024-08-05T22:13:43.593158183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8stf,Uid:1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:43.648125 containerd[1970]: time="2024-08-05T22:13:43.647799215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:13:43.648125 containerd[1970]: time="2024-08-05T22:13:43.647860118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:43.648125 containerd[1970]: time="2024-08-05T22:13:43.647881252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:13:43.648125 containerd[1970]: time="2024-08-05T22:13:43.647897168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:43.678499 containerd[1970]: time="2024-08-05T22:13:43.677937888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-ngbfg,Uid:7e9cf25f-ea35-4a0e-9463-fcdf9794b808,Namespace:tigera-operator,Attempt:0,}"
Aug 5 22:13:43.691919 systemd[1]: Started cri-containerd-3bfe1f24be287a13722682a1d4d89f805d2374d4f90f9be87569321c0d7a6f76.scope - libcontainer container 3bfe1f24be287a13722682a1d4d89f805d2374d4f90f9be87569321c0d7a6f76.
Aug 5 22:13:43.780847 containerd[1970]: time="2024-08-05T22:13:43.780702757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8stf,Uid:1e9e7c3c-9cc9-46bc-8134-8b9c7cbbd2f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bfe1f24be287a13722682a1d4d89f805d2374d4f90f9be87569321c0d7a6f76\""
Aug 5 22:13:43.787013 containerd[1970]: time="2024-08-05T22:13:43.786891743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:13:43.787213 containerd[1970]: time="2024-08-05T22:13:43.786998253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:43.787213 containerd[1970]: time="2024-08-05T22:13:43.787053646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:13:43.787213 containerd[1970]: time="2024-08-05T22:13:43.787070154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:43.868997 containerd[1970]: time="2024-08-05T22:13:43.867467996Z" level=info msg="CreateContainer within sandbox \"3bfe1f24be287a13722682a1d4d89f805d2374d4f90f9be87569321c0d7a6f76\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 22:13:43.905104 systemd[1]: Started cri-containerd-673095c1ba3ca85c8c47f60223b7547001cf3ea8459fb688cb3755ff619f36f9.scope - libcontainer container 673095c1ba3ca85c8c47f60223b7547001cf3ea8459fb688cb3755ff619f36f9.
Aug 5 22:13:43.972933 containerd[1970]: time="2024-08-05T22:13:43.972874346Z" level=info msg="CreateContainer within sandbox \"3bfe1f24be287a13722682a1d4d89f805d2374d4f90f9be87569321c0d7a6f76\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aedce0b0b1471320ed77e9e4084cc092f0de19caccea92d73234d7f9be175542\""
Aug 5 22:13:43.980865 containerd[1970]: time="2024-08-05T22:13:43.980823392Z" level=info msg="StartContainer for \"aedce0b0b1471320ed77e9e4084cc092f0de19caccea92d73234d7f9be175542\""
Aug 5 22:13:44.011246 containerd[1970]: time="2024-08-05T22:13:44.011046359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-ngbfg,Uid:7e9cf25f-ea35-4a0e-9463-fcdf9794b808,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"673095c1ba3ca85c8c47f60223b7547001cf3ea8459fb688cb3755ff619f36f9\""
Aug 5 22:13:44.016610 containerd[1970]: time="2024-08-05T22:13:44.016429850Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 22:13:44.040893 systemd[1]: Started cri-containerd-aedce0b0b1471320ed77e9e4084cc092f0de19caccea92d73234d7f9be175542.scope - libcontainer container aedce0b0b1471320ed77e9e4084cc092f0de19caccea92d73234d7f9be175542.
Aug 5 22:13:44.090299 containerd[1970]: time="2024-08-05T22:13:44.089425796Z" level=info msg="StartContainer for \"aedce0b0b1471320ed77e9e4084cc092f0de19caccea92d73234d7f9be175542\" returns successfully"
Aug 5 22:13:44.510736 systemd[1]: run-containerd-runc-k8s.io-3bfe1f24be287a13722682a1d4d89f805d2374d4f90f9be87569321c0d7a6f76-runc.Cj2nNn.mount: Deactivated successfully.
Aug 5 22:13:45.589293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256456846.mount: Deactivated successfully.
Aug 5 22:13:46.681208 containerd[1970]: time="2024-08-05T22:13:46.679523371Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:46.688887 containerd[1970]: time="2024-08-05T22:13:46.683169107Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076108"
Aug 5 22:13:46.693561 containerd[1970]: time="2024-08-05T22:13:46.693408247Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:46.737140 containerd[1970]: time="2024-08-05T22:13:46.737074225Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:46.748469 containerd[1970]: time="2024-08-05T22:13:46.738255558Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.721773765s"
Aug 5 22:13:46.748469 containerd[1970]: time="2024-08-05T22:13:46.738465759Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug 5 22:13:46.748469 containerd[1970]: time="2024-08-05T22:13:46.743214909Z" level=info msg="CreateContainer within sandbox \"673095c1ba3ca85c8c47f60223b7547001cf3ea8459fb688cb3755ff619f36f9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 22:13:46.849686 containerd[1970]: time="2024-08-05T22:13:46.849619388Z" level=info msg="CreateContainer within sandbox \"673095c1ba3ca85c8c47f60223b7547001cf3ea8459fb688cb3755ff619f36f9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db\""
Aug 5 22:13:46.850841 containerd[1970]: time="2024-08-05T22:13:46.850786225Z" level=info msg="StartContainer for \"e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db\""
Aug 5 22:13:46.901932 systemd[1]: Started cri-containerd-e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db.scope - libcontainer container e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db.
Aug 5 22:13:46.961625 containerd[1970]: time="2024-08-05T22:13:46.961055866Z" level=info msg="StartContainer for \"e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db\" returns successfully"
Aug 5 22:13:47.572212 kubelet[3384]: I0805 22:13:47.572044 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m8stf" podStartSLOduration=4.572018696 podStartE2EDuration="4.572018696s" podCreationTimestamp="2024-08-05 22:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:44.490674563 +0000 UTC m=+14.496001951" watchObservedRunningTime="2024-08-05 22:13:47.572018696 +0000 UTC m=+17.577346082"
Aug 5 22:13:51.301845 kubelet[3384]: I0805 22:13:51.301770 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-ngbfg" podStartSLOduration=5.577259046 podStartE2EDuration="8.301748192s" podCreationTimestamp="2024-08-05 22:13:43 +0000 UTC" firstStartedPulling="2024-08-05 22:13:44.015354986 +0000 UTC m=+14.020682356" lastFinishedPulling="2024-08-05 22:13:46.739844127 +0000 UTC m=+16.745171502" observedRunningTime="2024-08-05 22:13:47.575686588 +0000 UTC m=+17.581013978" watchObservedRunningTime="2024-08-05 22:13:51.301748192 +0000 UTC m=+21.307075581"
Aug 5 22:13:51.307920 kubelet[3384]: I0805 22:13:51.301976 3384 topology_manager.go:215] "Topology Admit Handler" podUID="5376d10c-6710-406f-a231-fd3afe316ddb" podNamespace="calico-system" podName="calico-typha-84f657f597-92ffh"
Aug 5 22:13:51.365240 systemd[1]: Created slice kubepods-besteffort-pod5376d10c_6710_406f_a231_fd3afe316ddb.slice - libcontainer container kubepods-besteffort-pod5376d10c_6710_406f_a231_fd3afe316ddb.slice.
Aug 5 22:13:51.371006 kubelet[3384]: I0805 22:13:51.367234 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvlv\" (UniqueName: \"kubernetes.io/projected/5376d10c-6710-406f-a231-fd3afe316ddb-kube-api-access-4jvlv\") pod \"calico-typha-84f657f597-92ffh\" (UID: \"5376d10c-6710-406f-a231-fd3afe316ddb\") " pod="calico-system/calico-typha-84f657f597-92ffh"
Aug 5 22:13:51.371006 kubelet[3384]: I0805 22:13:51.367281 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5376d10c-6710-406f-a231-fd3afe316ddb-tigera-ca-bundle\") pod \"calico-typha-84f657f597-92ffh\" (UID: \"5376d10c-6710-406f-a231-fd3afe316ddb\") " pod="calico-system/calico-typha-84f657f597-92ffh"
Aug 5 22:13:51.371006 kubelet[3384]: I0805 22:13:51.367311 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5376d10c-6710-406f-a231-fd3afe316ddb-typha-certs\") pod \"calico-typha-84f657f597-92ffh\" (UID: \"5376d10c-6710-406f-a231-fd3afe316ddb\") " pod="calico-system/calico-typha-84f657f597-92ffh"
Aug 5 22:13:51.599900 kubelet[3384]: I0805 22:13:51.599749 3384 topology_manager.go:215] "Topology Admit Handler" podUID="ef9b185b-b0c1-4247-9396-c8cb37eda1e3" podNamespace="calico-system" podName="calico-node-8zch7"
Aug 5 22:13:51.612665 kubelet[3384]: W0805 22:13:51.606873 3384 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-17-118" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-118' and this object
Aug 5 22:13:51.612665 kubelet[3384]: E0805 22:13:51.606914 3384 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-17-118" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-118' and this object
Aug 5 22:13:51.612665 kubelet[3384]: W0805 22:13:51.607008 3384 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-17-118" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-118' and this object
Aug 5 22:13:51.612665 kubelet[3384]: E0805 22:13:51.607027 3384 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-17-118" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-118' and this object
Aug 5 22:13:51.626445 systemd[1]: Created slice kubepods-besteffort-podef9b185b_b0c1_4247_9396_c8cb37eda1e3.slice - libcontainer container kubepods-besteffort-podef9b185b_b0c1_4247_9396_c8cb37eda1e3.slice.
Aug 5 22:13:51.671332 kubelet[3384]: I0805 22:13:51.670859 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-var-lib-calico\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671332 kubelet[3384]: I0805 22:13:51.670915 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-tigera-ca-bundle\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671332 kubelet[3384]: I0805 22:13:51.670938 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-cni-bin-dir\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671332 kubelet[3384]: I0805 22:13:51.670961 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-cni-net-dir\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671332 kubelet[3384]: I0805 22:13:51.670985 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-lib-modules\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671742 kubelet[3384]: I0805 22:13:51.671007 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-cni-log-dir\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671742 kubelet[3384]: I0805 22:13:51.671041 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-policysync\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671742 kubelet[3384]: I0805 22:13:51.671066 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-var-run-calico\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671742 kubelet[3384]: I0805 22:13:51.671094 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-node-certs\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.671742 kubelet[3384]: I0805 22:13:51.671131 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-xtables-lock\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.672172 kubelet[3384]: I0805 22:13:51.671153 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-flexvol-driver-host\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.672172 kubelet[3384]: I0805 22:13:51.671186 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqt54\" (UniqueName: \"kubernetes.io/projected/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-kube-api-access-dqt54\") pod \"calico-node-8zch7\" (UID: \"ef9b185b-b0c1-4247-9396-c8cb37eda1e3\") " pod="calico-system/calico-node-8zch7"
Aug 5 22:13:51.701155 containerd[1970]: time="2024-08-05T22:13:51.701111407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f657f597-92ffh,Uid:5376d10c-6710-406f-a231-fd3afe316ddb,Namespace:calico-system,Attempt:0,}"
Aug 5 22:13:51.715780 kubelet[3384]: I0805 22:13:51.714157 3384 topology_manager.go:215] "Topology Admit Handler" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c" podNamespace="calico-system" podName="csi-node-driver-sp2mz"
Aug 5 22:13:51.715780 kubelet[3384]: E0805 22:13:51.714757 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c"
Aug 5 22:13:51.775311 kubelet[3384]: I0805 22:13:51.774855 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c-socket-dir\") pod \"csi-node-driver-sp2mz\" (UID: \"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c\") " pod="calico-system/csi-node-driver-sp2mz"
Aug 5 22:13:51.775311 kubelet[3384]: I0805 22:13:51.774951 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chdx7\" (UniqueName: \"kubernetes.io/projected/c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c-kube-api-access-chdx7\") pod \"csi-node-driver-sp2mz\" (UID: \"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c\") " pod="calico-system/csi-node-driver-sp2mz"
Aug 5 22:13:51.775311 kubelet[3384]: I0805 22:13:51.775017 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c-varrun\") pod \"csi-node-driver-sp2mz\" (UID: \"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c\") " pod="calico-system/csi-node-driver-sp2mz"
Aug 5 22:13:51.775311 kubelet[3384]: I0805 22:13:51.775044 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c-registration-dir\") pod \"csi-node-driver-sp2mz\" (UID: \"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c\") " pod="calico-system/csi-node-driver-sp2mz"
Aug 5 22:13:51.775311 kubelet[3384]: I0805 22:13:51.775145 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c-kubelet-dir\") pod \"csi-node-driver-sp2mz\" (UID: \"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c\") " pod="calico-system/csi-node-driver-sp2mz"
Aug 5 22:13:51.787858 kubelet[3384]: E0805 22:13:51.787712 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.787858 kubelet[3384]: W0805 22:13:51.787853 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.791791 kubelet[3384]: E0805 22:13:51.789970 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.823775 kubelet[3384]: E0805 22:13:51.823736 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.823775 kubelet[3384]: W0805 22:13:51.823764 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.826046 kubelet[3384]: E0805 22:13:51.823792 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.829259 containerd[1970]: time="2024-08-05T22:13:51.828203284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:13:51.829776 containerd[1970]: time="2024-08-05T22:13:51.829462136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:51.830221 containerd[1970]: time="2024-08-05T22:13:51.829904312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:13:51.830964 containerd[1970]: time="2024-08-05T22:13:51.830449884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:51.863240 kubelet[3384]: E0805 22:13:51.861399 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.863240 kubelet[3384]: W0805 22:13:51.861429 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.863240 kubelet[3384]: E0805 22:13:51.861455 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.882688 kubelet[3384]: E0805 22:13:51.882653 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.882865 kubelet[3384]: W0805 22:13:51.882845 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.885662 kubelet[3384]: E0805 22:13:51.882945 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.887808 kubelet[3384]: E0805 22:13:51.887777 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.887980 kubelet[3384]: W0805 22:13:51.887960 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.888078 kubelet[3384]: E0805 22:13:51.888062 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.890904 kubelet[3384]: E0805 22:13:51.890744 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.890904 kubelet[3384]: W0805 22:13:51.890769 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.890904 kubelet[3384]: E0805 22:13:51.890816 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.895187 kubelet[3384]: E0805 22:13:51.894012 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.895187 kubelet[3384]: W0805 22:13:51.894039 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.895187 kubelet[3384]: E0805 22:13:51.894084 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.897086 kubelet[3384]: E0805 22:13:51.897064 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.897181 kubelet[3384]: W0805 22:13:51.897166 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.897293 kubelet[3384]: E0805 22:13:51.897272 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:13:51.897735 kubelet[3384]: E0805 22:13:51.897608 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:51.899679 kubelet[3384]: W0805 22:13:51.899524 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:51.899679 kubelet[3384]: E0805 22:13:51.899590 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:51.899898 kubelet[3384]: E0805 22:13:51.899886 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:51.899972 kubelet[3384]: W0805 22:13:51.899961 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:51.900072 kubelet[3384]: E0805 22:13:51.900054 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 5 22:13:51.901332 kubelet[3384]: E0805 22:13:51.900581 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.901332 kubelet[3384]: W0805 22:13:51.900784 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.901332 kubelet[3384]: E0805 22:13:51.900823 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.902675 kubelet[3384]: E0805 22:13:51.902612 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.904243 kubelet[3384]: W0805 22:13:51.903824 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.904243 kubelet[3384]: E0805 22:13:51.904162 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.905714 kubelet[3384]: E0805 22:13:51.905695 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.905979 kubelet[3384]: W0805 22:13:51.905908 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.907664 kubelet[3384]: E0805 22:13:51.906231 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.907858 kubelet[3384]: E0805 22:13:51.907838 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.908128 kubelet[3384]: W0805 22:13:51.907859 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.908272 kubelet[3384]: E0805 22:13:51.908226 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.908525 kubelet[3384]: E0805 22:13:51.908507 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.908589 kubelet[3384]: W0805 22:13:51.908526 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.908796 kubelet[3384]: E0805 22:13:51.908774 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.908887 kubelet[3384]: E0805 22:13:51.908874 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.908936 kubelet[3384]: W0805 22:13:51.908888 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.909689 kubelet[3384]: E0805 22:13:51.909344 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.910833 kubelet[3384]: E0805 22:13:51.910812 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.910918 kubelet[3384]: W0805 22:13:51.910834 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.912157 kubelet[3384]: E0805 22:13:51.911047 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.912157 kubelet[3384]: E0805 22:13:51.911682 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.912157 kubelet[3384]: W0805 22:13:51.911693 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.912907 kubelet[3384]: E0805 22:13:51.912884 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.913467 kubelet[3384]: E0805 22:13:51.913445 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.913542 kubelet[3384]: W0805 22:13:51.913467 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.916257 kubelet[3384]: E0805 22:13:51.913882 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.916257 kubelet[3384]: E0805 22:13:51.915266 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.916257 kubelet[3384]: W0805 22:13:51.915330 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.916257 kubelet[3384]: E0805 22:13:51.916212 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:13:51.917037 kubelet[3384]: E0805 22:13:51.917016 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.917037 kubelet[3384]: W0805 22:13:51.917034 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.918257 kubelet[3384]: E0805 22:13:51.917834 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.918257 kubelet[3384]: E0805 22:13:51.918144 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.918257 kubelet[3384]: W0805 22:13:51.918157 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.918257 kubelet[3384]: E0805 22:13:51.918241 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.919587 kubelet[3384]: E0805 22:13:51.918924 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.919587 kubelet[3384]: W0805 22:13:51.918938 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.919587 kubelet[3384]: E0805 22:13:51.919130 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.920660 kubelet[3384]: E0805 22:13:51.919823 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.920660 kubelet[3384]: W0805 22:13:51.919836 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.920660 kubelet[3384]: E0805 22:13:51.919854 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.921907 kubelet[3384]: E0805 22:13:51.921858 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.921907 kubelet[3384]: W0805 22:13:51.921905 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.923664 kubelet[3384]: E0805 22:13:51.922229 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.924043 kubelet[3384]: E0805 22:13:51.924020 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.924205 kubelet[3384]: W0805 22:13:51.924043 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.926062 kubelet[3384]: E0805 22:13:51.925863 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.929669 kubelet[3384]: E0805 22:13:51.929295 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.929669 kubelet[3384]: W0805 22:13:51.929324 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.931090 kubelet[3384]: E0805 22:13:51.931061 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.931190 kubelet[3384]: E0805 22:13:51.931155 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.931190 kubelet[3384]: W0805 22:13:51.931168 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.933665 kubelet[3384]: E0805 22:13:51.932443 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.933665 kubelet[3384]: W0805 22:13:51.932461 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.933665 kubelet[3384]: E0805 22:13:51.932482 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.933665 kubelet[3384]: E0805 22:13:51.932515 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:51.935531 systemd[1]: Started cri-containerd-27da6f979ede00f44a7cc66392c4891ba00f5bfe1ba89855071c9c7c6b3d0b35.scope - libcontainer container 27da6f979ede00f44a7cc66392c4891ba00f5bfe1ba89855071c9c7c6b3d0b35.
Aug 5 22:13:51.968667 kubelet[3384]: E0805 22:13:51.967472 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:51.968667 kubelet[3384]: W0805 22:13:51.967522 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:51.968667 kubelet[3384]: E0805 22:13:51.967547 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.012370 kubelet[3384]: E0805 22:13:52.012335 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.012370 kubelet[3384]: W0805 22:13:52.012365 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.012572 kubelet[3384]: E0805 22:13:52.012394 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:13:52.090836 containerd[1970]: time="2024-08-05T22:13:52.090794066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f657f597-92ffh,Uid:5376d10c-6710-406f-a231-fd3afe316ddb,Namespace:calico-system,Attempt:0,} returns sandbox id \"27da6f979ede00f44a7cc66392c4891ba00f5bfe1ba89855071c9c7c6b3d0b35\""
Aug 5 22:13:52.096222 containerd[1970]: time="2024-08-05T22:13:52.096181737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Aug 5 22:13:52.114276 kubelet[3384]: E0805 22:13:52.114157 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.114276 kubelet[3384]: W0805 22:13:52.114189 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.114276 kubelet[3384]: E0805 22:13:52.114213 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.216472 kubelet[3384]: E0805 22:13:52.216435 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.216777 kubelet[3384]: W0805 22:13:52.216678 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.216777 kubelet[3384]: E0805 22:13:52.216713 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.318768 kubelet[3384]: E0805 22:13:52.318688 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.318768 kubelet[3384]: W0805 22:13:52.318768 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.319278 kubelet[3384]: E0805 22:13:52.318798 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.421754 kubelet[3384]: E0805 22:13:52.421515 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.421754 kubelet[3384]: W0805 22:13:52.421539 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.421754 kubelet[3384]: E0805 22:13:52.421564 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.525200 kubelet[3384]: E0805 22:13:52.524766 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.525200 kubelet[3384]: W0805 22:13:52.524876 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.525200 kubelet[3384]: E0805 22:13:52.525018 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.626842 kubelet[3384]: E0805 22:13:52.626674 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.626842 kubelet[3384]: W0805 22:13:52.626701 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.626842 kubelet[3384]: E0805 22:13:52.626755 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.729182 kubelet[3384]: E0805 22:13:52.729003 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.729182 kubelet[3384]: W0805 22:13:52.729032 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.729182 kubelet[3384]: E0805 22:13:52.729059 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.788261 kubelet[3384]: E0805 22:13:52.787745 3384 secret.go:194] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Aug 5 22:13:52.788261 kubelet[3384]: E0805 22:13:52.787887 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-node-certs podName:ef9b185b-b0c1-4247-9396-c8cb37eda1e3 nodeName:}" failed. No retries permitted until 2024-08-05 22:13:53.287854284 +0000 UTC m=+23.293181667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/ef9b185b-b0c1-4247-9396-c8cb37eda1e3-node-certs") pod "calico-node-8zch7" (UID: "ef9b185b-b0c1-4247-9396-c8cb37eda1e3") : failed to sync secret cache: timed out waiting for the condition
Aug 5 22:13:52.831622 kubelet[3384]: E0805 22:13:52.831587 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.831622 kubelet[3384]: W0805 22:13:52.831612 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.831951 kubelet[3384]: E0805 22:13:52.831662 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:52.932743 kubelet[3384]: E0805 22:13:52.932713 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:52.932969 kubelet[3384]: W0805 22:13:52.932881 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:52.932969 kubelet[3384]: E0805 22:13:52.932911 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:13:53.039095 kubelet[3384]: E0805 22:13:53.038939 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.039095 kubelet[3384]: W0805 22:13:53.038968 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.039095 kubelet[3384]: E0805 22:13:53.038998 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.140506 kubelet[3384]: E0805 22:13:53.140377 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.140506 kubelet[3384]: W0805 22:13:53.140406 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.140506 kubelet[3384]: E0805 22:13:53.140433 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.242295 kubelet[3384]: E0805 22:13:53.242227 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.242295 kubelet[3384]: W0805 22:13:53.242290 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.243041 kubelet[3384]: E0805 22:13:53.242321 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.294106 kubelet[3384]: E0805 22:13:53.293954 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c" Aug 5 22:13:53.343572 kubelet[3384]: E0805 22:13:53.343530 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.343572 kubelet[3384]: W0805 22:13:53.343561 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.346353 kubelet[3384]: E0805 22:13:53.343587 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.348674 kubelet[3384]: E0805 22:13:53.346760 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.348674 kubelet[3384]: W0805 22:13:53.346787 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.348674 kubelet[3384]: E0805 22:13:53.346819 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.348674 kubelet[3384]: E0805 22:13:53.347239 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.348674 kubelet[3384]: W0805 22:13:53.347256 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.348674 kubelet[3384]: E0805 22:13:53.347276 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.348674 kubelet[3384]: E0805 22:13:53.347539 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.348674 kubelet[3384]: W0805 22:13:53.347552 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.348674 kubelet[3384]: E0805 22:13:53.347566 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.349921 kubelet[3384]: E0805 22:13:53.349896 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.349921 kubelet[3384]: W0805 22:13:53.349921 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.350123 kubelet[3384]: E0805 22:13:53.349941 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.375074 kubelet[3384]: E0805 22:13:53.373658 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.375074 kubelet[3384]: W0805 22:13:53.373687 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.375074 kubelet[3384]: E0805 22:13:53.373715 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.437041 containerd[1970]: time="2024-08-05T22:13:53.436342951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8zch7,Uid:ef9b185b-b0c1-4247-9396-c8cb37eda1e3,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:53.591226 containerd[1970]: time="2024-08-05T22:13:53.580729005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:53.591226 containerd[1970]: time="2024-08-05T22:13:53.580833670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:53.591226 containerd[1970]: time="2024-08-05T22:13:53.580868784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:53.591226 containerd[1970]: time="2024-08-05T22:13:53.580890881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:53.634022 systemd[1]: run-containerd-runc-k8s.io-f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4-runc.TRXVIs.mount: Deactivated successfully. 
Aug 5 22:13:53.647014 systemd[1]: Started cri-containerd-f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4.scope - libcontainer container f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4.
Aug 5 22:13:53.743214 containerd[1970]: time="2024-08-05T22:13:53.743162989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8zch7,Uid:ef9b185b-b0c1-4247-9396-c8cb37eda1e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4\""
Aug 5 22:13:55.290705 kubelet[3384]: E0805 22:13:55.289822 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c"
Aug 5 22:13:55.329235 containerd[1970]: time="2024-08-05T22:13:55.329189234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:55.331627 containerd[1970]: time="2024-08-05T22:13:55.331182176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030"
Aug 5 22:13:55.333352 containerd[1970]: time="2024-08-05T22:13:55.333313754Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:55.338119 containerd[1970]: time="2024-08-05T22:13:55.337939418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:55.340907 containerd[1970]: time="2024-08-05T22:13:55.340163945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.243931484s"
Aug 5 22:13:55.340907 containerd[1970]: time="2024-08-05T22:13:55.340219146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\""
Aug 5 22:13:55.343261 containerd[1970]: time="2024-08-05T22:13:55.341902514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Aug 5 22:13:55.375569 containerd[1970]: time="2024-08-05T22:13:55.375442303Z" level=info msg="CreateContainer within sandbox \"27da6f979ede00f44a7cc66392c4891ba00f5bfe1ba89855071c9c7c6b3d0b35\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 5 22:13:55.409864 containerd[1970]: time="2024-08-05T22:13:55.409808619Z" level=info msg="CreateContainer within sandbox \"27da6f979ede00f44a7cc66392c4891ba00f5bfe1ba89855071c9c7c6b3d0b35\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a66f7688e08020109d60535e44a353fe072133c9b06e7753f5e5b0f738647f97\""
Aug 5 22:13:55.411700 containerd[1970]: time="2024-08-05T22:13:55.411662398Z" level=info msg="StartContainer for \"a66f7688e08020109d60535e44a353fe072133c9b06e7753f5e5b0f738647f97\""
Aug 5 22:13:55.630875 systemd[1]: Started cri-containerd-a66f7688e08020109d60535e44a353fe072133c9b06e7753f5e5b0f738647f97.scope - libcontainer container a66f7688e08020109d60535e44a353fe072133c9b06e7753f5e5b0f738647f97.
Aug 5 22:13:55.735164 containerd[1970]: time="2024-08-05T22:13:55.734467260Z" level=info msg="StartContainer for \"a66f7688e08020109d60535e44a353fe072133c9b06e7753f5e5b0f738647f97\" returns successfully"
Aug 5 22:13:56.358231 systemd[1]: run-containerd-runc-k8s.io-a66f7688e08020109d60535e44a353fe072133c9b06e7753f5e5b0f738647f97-runc.2oDzxF.mount: Deactivated successfully.
Aug 5 22:13:56.657126 kubelet[3384]: I0805 22:13:56.655308 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84f657f597-92ffh" podStartSLOduration=2.407858208 podStartE2EDuration="5.655283172s" podCreationTimestamp="2024-08-05 22:13:51 +0000 UTC" firstStartedPulling="2024-08-05 22:13:52.093774574 +0000 UTC m=+22.099101946" lastFinishedPulling="2024-08-05 22:13:55.341199543 +0000 UTC m=+25.346526910" observedRunningTime="2024-08-05 22:13:56.648913885 +0000 UTC m=+26.654241272" watchObservedRunningTime="2024-08-05 22:13:56.655283172 +0000 UTC m=+26.660610562"
Aug 5 22:13:56.703110 kubelet[3384]: E0805 22:13:56.703075 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.703376 kubelet[3384]: W0805 22:13:56.703106 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.703376 kubelet[3384]: E0805 22:13:56.703158 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.704298 kubelet[3384]: E0805 22:13:56.703807 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.704298 kubelet[3384]: W0805 22:13:56.703827 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.704298 kubelet[3384]: E0805 22:13:56.703869 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.704298 kubelet[3384]: E0805 22:13:56.704262 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.704298 kubelet[3384]: W0805 22:13:56.704274 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.704794 kubelet[3384]: E0805 22:13:56.704299 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.704794 kubelet[3384]: E0805 22:13:56.704722 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.704794 kubelet[3384]: W0805 22:13:56.704738 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.704794 kubelet[3384]: E0805 22:13:56.704752 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.705636 kubelet[3384]: E0805 22:13:56.705040 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.705636 kubelet[3384]: W0805 22:13:56.705053 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.705636 kubelet[3384]: E0805 22:13:56.705066 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.705636 kubelet[3384]: E0805 22:13:56.705436 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.705636 kubelet[3384]: W0805 22:13:56.705465 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.705636 kubelet[3384]: E0805 22:13:56.705478 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.706069 kubelet[3384]: E0805 22:13:56.705916 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.706069 kubelet[3384]: W0805 22:13:56.705937 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.706069 kubelet[3384]: E0805 22:13:56.705952 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.706305 kubelet[3384]: E0805 22:13:56.706279 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.706305 kubelet[3384]: W0805 22:13:56.706290 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.706305 kubelet[3384]: E0805 22:13:56.706303 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.707679 kubelet[3384]: E0805 22:13:56.706607 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.707679 kubelet[3384]: W0805 22:13:56.706621 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.707679 kubelet[3384]: E0805 22:13:56.706634 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.707679 kubelet[3384]: E0805 22:13:56.707257 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.707679 kubelet[3384]: W0805 22:13:56.707271 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.707679 kubelet[3384]: E0805 22:13:56.707379 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.708184 kubelet[3384]: E0805 22:13:56.707707 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.708184 kubelet[3384]: W0805 22:13:56.707719 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.708184 kubelet[3384]: E0805 22:13:56.707732 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.708184 kubelet[3384]: E0805 22:13:56.708106 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.708184 kubelet[3384]: W0805 22:13:56.708118 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.708184 kubelet[3384]: E0805 22:13:56.708131 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.708456 kubelet[3384]: E0805 22:13:56.708397 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.708456 kubelet[3384]: W0805 22:13:56.708407 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.708456 kubelet[3384]: E0805 22:13:56.708421 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.709686 kubelet[3384]: E0805 22:13:56.708729 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.709686 kubelet[3384]: W0805 22:13:56.708744 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.709686 kubelet[3384]: E0805 22:13:56.708759 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.709686 kubelet[3384]: E0805 22:13:56.709008 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.709686 kubelet[3384]: W0805 22:13:56.709019 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.709686 kubelet[3384]: E0805 22:13:56.709050 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.788421 kubelet[3384]: E0805 22:13:56.788342 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.788421 kubelet[3384]: W0805 22:13:56.788382 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.788915 kubelet[3384]: E0805 22:13:56.788698 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.789230 kubelet[3384]: E0805 22:13:56.789104 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.789230 kubelet[3384]: W0805 22:13:56.789153 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.789230 kubelet[3384]: E0805 22:13:56.789172 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.791813 kubelet[3384]: E0805 22:13:56.791492 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.791813 kubelet[3384]: W0805 22:13:56.791532 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.791813 kubelet[3384]: E0805 22:13:56.791604 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.792362 kubelet[3384]: E0805 22:13:56.792210 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.792362 kubelet[3384]: W0805 22:13:56.792227 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.792362 kubelet[3384]: E0805 22:13:56.792245 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.793818 kubelet[3384]: E0805 22:13:56.793781 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.793818 kubelet[3384]: W0805 22:13:56.793798 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.794149 kubelet[3384]: E0805 22:13:56.793987 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.794315 kubelet[3384]: E0805 22:13:56.794303 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.794585 kubelet[3384]: W0805 22:13:56.794456 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.794585 kubelet[3384]: E0805 22:13:56.794493 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.795243 kubelet[3384]: E0805 22:13:56.795228 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.796381 kubelet[3384]: W0805 22:13:56.795333 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.797155 kubelet[3384]: E0805 22:13:56.796539 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.797155 kubelet[3384]: W0805 22:13:56.796555 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.798636 kubelet[3384]: E0805 22:13:56.798148 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.798636 kubelet[3384]: E0805 22:13:56.798455 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.799334 kubelet[3384]: E0805 22:13:56.799305 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.799416 kubelet[3384]: W0805 22:13:56.799335 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.799416 kubelet[3384]: E0805 22:13:56.799393 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.801078 kubelet[3384]: E0805 22:13:56.800710 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.801078 kubelet[3384]: W0805 22:13:56.800725 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.801078 kubelet[3384]: E0805 22:13:56.800773 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.801438 kubelet[3384]: E0805 22:13:56.801419 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.801508 kubelet[3384]: W0805 22:13:56.801438 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.801556 kubelet[3384]: E0805 22:13:56.801526 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.801881 kubelet[3384]: E0805 22:13:56.801832 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.801946 kubelet[3384]: W0805 22:13:56.801890 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.802435 kubelet[3384]: E0805 22:13:56.802228 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.802556 kubelet[3384]: E0805 22:13:56.802542 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.802747 kubelet[3384]: W0805 22:13:56.802557 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.802747 kubelet[3384]: E0805 22:13:56.802697 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.803298 kubelet[3384]: E0805 22:13:56.803280 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.803298 kubelet[3384]: W0805 22:13:56.803295 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.804734 kubelet[3384]: E0805 22:13:56.803315 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.810242 kubelet[3384]: E0805 22:13:56.810204 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.810242 kubelet[3384]: W0805 22:13:56.810238 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.811491 kubelet[3384]: E0805 22:13:56.810267 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.811491 kubelet[3384]: E0805 22:13:56.810835 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.811491 kubelet[3384]: W0805 22:13:56.810850 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.811491 kubelet[3384]: E0805 22:13:56.810893 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.816660 kubelet[3384]: E0805 22:13:56.816437 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.820878 kubelet[3384]: W0805 22:13:56.819469 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.825229 kubelet[3384]: E0805 22:13:56.821774 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:56.837532 kubelet[3384]: E0805 22:13:56.827896 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:56.837532 kubelet[3384]: W0805 22:13:56.828887 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:56.837532 kubelet[3384]: E0805 22:13:56.836696 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:57.167962 containerd[1970]: time="2024-08-05T22:13:57.167911643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:57.170928 containerd[1970]: time="2024-08-05T22:13:57.170817213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Aug 5 22:13:57.173903 containerd[1970]: time="2024-08-05T22:13:57.173809940Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:57.180101 containerd[1970]: time="2024-08-05T22:13:57.180047752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:57.183002 containerd[1970]: time="2024-08-05T22:13:57.182767720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.840810323s"
Aug 5 22:13:57.183002 containerd[1970]: time="2024-08-05T22:13:57.182820638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Aug 5 22:13:57.224928 containerd[1970]: time="2024-08-05T22:13:57.224882169Z" level=info msg="CreateContainer within sandbox \"f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 5 22:13:57.268104 containerd[1970]: time="2024-08-05T22:13:57.267952973Z" level=info msg="CreateContainer within sandbox \"f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215\""
Aug 5 22:13:57.270577 containerd[1970]: time="2024-08-05T22:13:57.270530517Z" level=info msg="StartContainer for \"8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215\""
Aug 5 22:13:57.290221 kubelet[3384]: E0805 22:13:57.290170 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c"
Aug 5 22:13:57.372318 systemd[1]: Started cri-containerd-8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215.scope - libcontainer container 8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215.
Aug 5 22:13:57.524510 containerd[1970]: time="2024-08-05T22:13:57.524332485Z" level=info msg="StartContainer for \"8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215\" returns successfully"
Aug 5 22:13:57.575162 systemd[1]: cri-containerd-8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215.scope: Deactivated successfully.
Aug 5 22:13:57.634101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215-rootfs.mount: Deactivated successfully.
Aug 5 22:13:57.653540 kubelet[3384]: I0805 22:13:57.651337 3384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 22:13:58.219663 containerd[1970]: time="2024-08-05T22:13:58.205911540Z" level=info msg="shim disconnected" id=8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215 namespace=k8s.io
Aug 5 22:13:58.220275 containerd[1970]: time="2024-08-05T22:13:58.219774707Z" level=warning msg="cleaning up after shim disconnected" id=8ff49447d92ddccaebf0016de7598b9ac1fbf48ac09ed08319847d922480b215 namespace=k8s.io
Aug 5 22:13:58.220275 containerd[1970]: time="2024-08-05T22:13:58.219802676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:13:58.671623 containerd[1970]: time="2024-08-05T22:13:58.666873159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 5 22:13:59.290516 kubelet[3384]: E0805 22:13:59.290354 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c"
Aug 5 22:14:01.291575 kubelet[3384]: E0805 22:14:01.291463 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c"
Aug 5 22:14:03.290316 kubelet[3384]: E0805 22:14:03.289945 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c"
Aug 5 22:14:05.300552 kubelet[3384]: E0805 22:14:05.297801 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c"
Aug 5 22:14:06.647363 containerd[1970]: time="2024-08-05T22:14:06.647224891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:06.650745 containerd[1970]: time="2024-08-05T22:14:06.649854955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Aug 5 22:14:06.652993 containerd[1970]: time="2024-08-05T22:14:06.652943384Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:06.669453 containerd[1970]: time="2024-08-05T22:14:06.669209206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:06.674004 containerd[1970]: time="2024-08-05T22:14:06.673954951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 8.007033485s"
Aug 5 22:14:06.674004 containerd[1970]: time="2024-08-05T22:14:06.674001137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Aug 5 22:14:06.677545
containerd[1970]: time="2024-08-05T22:14:06.677500338Z" level=info msg="CreateContainer within sandbox \"f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:14:06.730224 containerd[1970]: time="2024-08-05T22:14:06.730174437Z" level=info msg="CreateContainer within sandbox \"f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a\"" Aug 5 22:14:06.731195 containerd[1970]: time="2024-08-05T22:14:06.731034833Z" level=info msg="StartContainer for \"4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a\"" Aug 5 22:14:06.868097 systemd[1]: Started cri-containerd-4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a.scope - libcontainer container 4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a. Aug 5 22:14:06.940843 containerd[1970]: time="2024-08-05T22:14:06.940198397Z" level=info msg="StartContainer for \"4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a\" returns successfully" Aug 5 22:14:07.289862 kubelet[3384]: E0805 22:14:07.289811 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c" Aug 5 22:14:09.291420 kubelet[3384]: E0805 22:14:09.291365 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c" Aug 5 22:14:10.315676 systemd[1]: 
cri-containerd-4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a.scope: Deactivated successfully. Aug 5 22:14:10.379855 kubelet[3384]: I0805 22:14:10.379809 3384 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 22:14:10.384450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a-rootfs.mount: Deactivated successfully. Aug 5 22:14:10.387705 containerd[1970]: time="2024-08-05T22:14:10.387473169Z" level=info msg="shim disconnected" id=4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a namespace=k8s.io Aug 5 22:14:10.387705 containerd[1970]: time="2024-08-05T22:14:10.387547389Z" level=warning msg="cleaning up after shim disconnected" id=4c209fabcda3f6f2b3d73203400965226799f6f35840599db446844bee3ae34a namespace=k8s.io Aug 5 22:14:10.387705 containerd[1970]: time="2024-08-05T22:14:10.387562462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:14:10.484450 kubelet[3384]: I0805 22:14:10.484331 3384 topology_manager.go:215] "Topology Admit Handler" podUID="8bb515aa-fe24-4eff-89f5-c6780d1b60c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wm525" Aug 5 22:14:10.510741 kubelet[3384]: I0805 22:14:10.509831 3384 topology_manager.go:215] "Topology Admit Handler" podUID="64251e71-0ce8-4f60-9537-431748961741" podNamespace="calico-system" podName="calico-kube-controllers-b54bf7c66-tr2nl" Aug 5 22:14:10.511821 kubelet[3384]: I0805 22:14:10.511779 3384 topology_manager.go:215] "Topology Admit Handler" podUID="ec49b585-a6b6-451a-9d12-04277915267d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2zqvt" Aug 5 22:14:10.518213 systemd[1]: Created slice kubepods-burstable-pod8bb515aa_fe24_4eff_89f5_c6780d1b60c8.slice - libcontainer container kubepods-burstable-pod8bb515aa_fe24_4eff_89f5_c6780d1b60c8.slice. 
Aug 5 22:14:10.534877 systemd[1]: Created slice kubepods-besteffort-pod64251e71_0ce8_4f60_9537_431748961741.slice - libcontainer container kubepods-besteffort-pod64251e71_0ce8_4f60_9537_431748961741.slice. Aug 5 22:14:10.551479 systemd[1]: Created slice kubepods-burstable-podec49b585_a6b6_451a_9d12_04277915267d.slice - libcontainer container kubepods-burstable-podec49b585_a6b6_451a_9d12_04277915267d.slice. Aug 5 22:14:10.669260 kubelet[3384]: I0805 22:14:10.668407 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec49b585-a6b6-451a-9d12-04277915267d-config-volume\") pod \"coredns-7db6d8ff4d-2zqvt\" (UID: \"ec49b585-a6b6-451a-9d12-04277915267d\") " pod="kube-system/coredns-7db6d8ff4d-2zqvt" Aug 5 22:14:10.669260 kubelet[3384]: I0805 22:14:10.668530 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cdbf\" (UniqueName: \"kubernetes.io/projected/64251e71-0ce8-4f60-9537-431748961741-kube-api-access-8cdbf\") pod \"calico-kube-controllers-b54bf7c66-tr2nl\" (UID: \"64251e71-0ce8-4f60-9537-431748961741\") " pod="calico-system/calico-kube-controllers-b54bf7c66-tr2nl" Aug 5 22:14:10.669260 kubelet[3384]: I0805 22:14:10.668566 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bb515aa-fe24-4eff-89f5-c6780d1b60c8-config-volume\") pod \"coredns-7db6d8ff4d-wm525\" (UID: \"8bb515aa-fe24-4eff-89f5-c6780d1b60c8\") " pod="kube-system/coredns-7db6d8ff4d-wm525" Aug 5 22:14:10.669260 kubelet[3384]: I0805 22:14:10.668602 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwkt\" (UniqueName: \"kubernetes.io/projected/ec49b585-a6b6-451a-9d12-04277915267d-kube-api-access-dxwkt\") pod \"coredns-7db6d8ff4d-2zqvt\" (UID: 
\"ec49b585-a6b6-451a-9d12-04277915267d\") " pod="kube-system/coredns-7db6d8ff4d-2zqvt" Aug 5 22:14:10.669260 kubelet[3384]: I0805 22:14:10.668628 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64251e71-0ce8-4f60-9537-431748961741-tigera-ca-bundle\") pod \"calico-kube-controllers-b54bf7c66-tr2nl\" (UID: \"64251e71-0ce8-4f60-9537-431748961741\") " pod="calico-system/calico-kube-controllers-b54bf7c66-tr2nl" Aug 5 22:14:10.669501 kubelet[3384]: I0805 22:14:10.668677 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbsc2\" (UniqueName: \"kubernetes.io/projected/8bb515aa-fe24-4eff-89f5-c6780d1b60c8-kube-api-access-tbsc2\") pod \"coredns-7db6d8ff4d-wm525\" (UID: \"8bb515aa-fe24-4eff-89f5-c6780d1b60c8\") " pod="kube-system/coredns-7db6d8ff4d-wm525" Aug 5 22:14:10.729202 containerd[1970]: time="2024-08-05T22:14:10.729061219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:14:10.859214 containerd[1970]: time="2024-08-05T22:14:10.858581432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2zqvt,Uid:ec49b585-a6b6-451a-9d12-04277915267d,Namespace:kube-system,Attempt:0,}" Aug 5 22:14:11.122506 containerd[1970]: time="2024-08-05T22:14:11.122440815Z" level=error msg="Failed to destroy network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.126497 containerd[1970]: time="2024-08-05T22:14:11.126431513Z" level=error msg="encountered an error cleaning up failed sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.126840 containerd[1970]: time="2024-08-05T22:14:11.126530722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2zqvt,Uid:ec49b585-a6b6-451a-9d12-04277915267d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.126975 kubelet[3384]: E0805 22:14:11.126793 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.126975 kubelet[3384]: E0805 22:14:11.126880 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2zqvt" Aug 5 22:14:11.126975 kubelet[3384]: E0805 22:14:11.126908 3384 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2zqvt" Aug 5 22:14:11.127398 kubelet[3384]: E0805 22:14:11.126969 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2zqvt_kube-system(ec49b585-a6b6-451a-9d12-04277915267d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2zqvt_kube-system(ec49b585-a6b6-451a-9d12-04277915267d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2zqvt" podUID="ec49b585-a6b6-451a-9d12-04277915267d" Aug 5 22:14:11.129739 containerd[1970]: time="2024-08-05T22:14:11.129600783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wm525,Uid:8bb515aa-fe24-4eff-89f5-c6780d1b60c8,Namespace:kube-system,Attempt:0,}" Aug 5 22:14:11.141353 containerd[1970]: time="2024-08-05T22:14:11.141311158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b54bf7c66-tr2nl,Uid:64251e71-0ce8-4f60-9537-431748961741,Namespace:calico-system,Attempt:0,}" Aug 5 22:14:11.303212 systemd[1]: Created slice kubepods-besteffort-podc04d2f71_56dd_4ddf_be4f_eac15a3c0c8c.slice - libcontainer container kubepods-besteffort-podc04d2f71_56dd_4ddf_be4f_eac15a3c0c8c.slice. 
Aug 5 22:14:11.311738 containerd[1970]: time="2024-08-05T22:14:11.311565458Z" level=error msg="Failed to destroy network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.312193 containerd[1970]: time="2024-08-05T22:14:11.312026823Z" level=error msg="encountered an error cleaning up failed sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.312193 containerd[1970]: time="2024-08-05T22:14:11.312105820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wm525,Uid:8bb515aa-fe24-4eff-89f5-c6780d1b60c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.312570 containerd[1970]: time="2024-08-05T22:14:11.312434845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sp2mz,Uid:c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c,Namespace:calico-system,Attempt:0,}" Aug 5 22:14:11.313248 kubelet[3384]: E0805 22:14:11.313048 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.313248 kubelet[3384]: E0805 22:14:11.313109 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wm525" Aug 5 22:14:11.313248 kubelet[3384]: E0805 22:14:11.313136 3384 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wm525" Aug 5 22:14:11.313403 kubelet[3384]: E0805 22:14:11.313186 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wm525_kube-system(8bb515aa-fe24-4eff-89f5-c6780d1b60c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wm525_kube-system(8bb515aa-fe24-4eff-89f5-c6780d1b60c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wm525" podUID="8bb515aa-fe24-4eff-89f5-c6780d1b60c8" Aug 5 22:14:11.331287 containerd[1970]: time="2024-08-05T22:14:11.330666880Z" level=error msg="Failed to destroy network for sandbox 
\"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.331287 containerd[1970]: time="2024-08-05T22:14:11.331024426Z" level=error msg="encountered an error cleaning up failed sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.331287 containerd[1970]: time="2024-08-05T22:14:11.331084086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b54bf7c66-tr2nl,Uid:64251e71-0ce8-4f60-9537-431748961741,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.332175 kubelet[3384]: E0805 22:14:11.331302 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.332175 kubelet[3384]: E0805 22:14:11.331361 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b54bf7c66-tr2nl" Aug 5 22:14:11.332175 kubelet[3384]: E0805 22:14:11.331386 3384 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b54bf7c66-tr2nl" Aug 5 22:14:11.332322 kubelet[3384]: E0805 22:14:11.331440 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b54bf7c66-tr2nl_calico-system(64251e71-0ce8-4f60-9537-431748961741)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b54bf7c66-tr2nl_calico-system(64251e71-0ce8-4f60-9537-431748961741)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b54bf7c66-tr2nl" podUID="64251e71-0ce8-4f60-9537-431748961741" Aug 5 22:14:11.380816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d-shm.mount: Deactivated successfully. 
Aug 5 22:14:11.432066 containerd[1970]: time="2024-08-05T22:14:11.432011559Z" level=error msg="Failed to destroy network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.434155 containerd[1970]: time="2024-08-05T22:14:11.434073302Z" level=error msg="encountered an error cleaning up failed sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.434276 containerd[1970]: time="2024-08-05T22:14:11.434160427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sp2mz,Uid:c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.434504 kubelet[3384]: E0805 22:14:11.434463 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:11.434863 kubelet[3384]: E0805 22:14:11.434572 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sp2mz" Aug 5 22:14:11.434863 kubelet[3384]: E0805 22:14:11.434602 3384 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sp2mz" Aug 5 22:14:11.434863 kubelet[3384]: E0805 22:14:11.434699 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sp2mz_calico-system(c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sp2mz_calico-system(c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c" Aug 5 22:14:11.436591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d-shm.mount: Deactivated successfully. 
Aug 5 22:14:11.731192 kubelet[3384]: I0805 22:14:11.730614 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:11.736966 kubelet[3384]: I0805 22:14:11.735499 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:11.741399 containerd[1970]: time="2024-08-05T22:14:11.740786563Z" level=info msg="StopPodSandbox for \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\"" Aug 5 22:14:11.741399 containerd[1970]: time="2024-08-05T22:14:11.741128027Z" level=info msg="Ensure that sandbox 101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad in task-service has been cleanup successfully" Aug 5 22:14:11.752871 containerd[1970]: time="2024-08-05T22:14:11.752357722Z" level=info msg="StopPodSandbox for \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\"" Aug 5 22:14:11.753471 containerd[1970]: time="2024-08-05T22:14:11.753330261Z" level=info msg="Ensure that sandbox 42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d in task-service has been cleanup successfully" Aug 5 22:14:11.754770 kubelet[3384]: I0805 22:14:11.754062 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:11.755962 containerd[1970]: time="2024-08-05T22:14:11.755924587Z" level=info msg="StopPodSandbox for \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\"" Aug 5 22:14:11.759627 containerd[1970]: time="2024-08-05T22:14:11.759491981Z" level=info msg="Ensure that sandbox a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde in task-service has been cleanup successfully" Aug 5 22:14:11.764422 kubelet[3384]: I0805 22:14:11.764382 3384 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Aug 5 22:14:11.772745 containerd[1970]: time="2024-08-05T22:14:11.768979471Z" level=info msg="StopPodSandbox for \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\"" Aug 5 22:14:11.773666 containerd[1970]: time="2024-08-05T22:14:11.772886347Z" level=info msg="Ensure that sandbox 4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d in task-service has been cleanup successfully" Aug 5 22:14:12.007600 containerd[1970]: time="2024-08-05T22:14:12.005836325Z" level=error msg="StopPodSandbox for \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\" failed" error="failed to destroy network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:12.007801 kubelet[3384]: E0805 22:14:12.006102 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:12.007801 kubelet[3384]: E0805 22:14:12.006163 3384 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad"} Aug 5 22:14:12.007801 kubelet[3384]: E0805 22:14:12.006245 3384 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64251e71-0ce8-4f60-9537-431748961741\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:12.007801 kubelet[3384]: E0805 22:14:12.006273 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64251e71-0ce8-4f60-9537-431748961741\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b54bf7c66-tr2nl" podUID="64251e71-0ce8-4f60-9537-431748961741" Aug 5 22:14:12.018262 containerd[1970]: time="2024-08-05T22:14:12.017238492Z" level=error msg="StopPodSandbox for \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\" failed" error="failed to destroy network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:12.018262 containerd[1970]: time="2024-08-05T22:14:12.017802281Z" level=error msg="StopPodSandbox for \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\" failed" error="failed to destroy network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:12.018993 kubelet[3384]: E0805 
22:14:12.017602 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Aug 5 22:14:12.018993 kubelet[3384]: E0805 22:14:12.017939 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:12.018993 kubelet[3384]: E0805 22:14:12.017997 3384 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde"} Aug 5 22:14:12.018993 kubelet[3384]: E0805 22:14:12.018044 3384 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8bb515aa-fe24-4eff-89f5-c6780d1b60c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:12.019274 containerd[1970]: time="2024-08-05T22:14:12.018501304Z" level=error msg="StopPodSandbox for 
\"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\" failed" error="failed to destroy network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:12.019531 kubelet[3384]: E0805 22:14:12.018076 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8bb515aa-fe24-4eff-89f5-c6780d1b60c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wm525" podUID="8bb515aa-fe24-4eff-89f5-c6780d1b60c8" Aug 5 22:14:12.019531 kubelet[3384]: E0805 22:14:12.018127 3384 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"} Aug 5 22:14:12.019531 kubelet[3384]: E0805 22:14:12.018157 3384 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec49b585-a6b6-451a-9d12-04277915267d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:12.019531 kubelet[3384]: E0805 22:14:12.018182 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec49b585-a6b6-451a-9d12-04277915267d\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2zqvt" podUID="ec49b585-a6b6-451a-9d12-04277915267d" Aug 5 22:14:12.019884 kubelet[3384]: E0805 22:14:12.018724 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:12.019884 kubelet[3384]: E0805 22:14:12.018782 3384 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d"} Aug 5 22:14:12.019884 kubelet[3384]: E0805 22:14:12.018829 3384 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:12.019884 kubelet[3384]: E0805 22:14:12.018858 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sp2mz" podUID="c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c" Aug 5 22:14:14.784053 kubelet[3384]: I0805 22:14:14.783516 3384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:14:19.154882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858376762.mount: Deactivated successfully. Aug 5 22:14:19.379982 containerd[1970]: time="2024-08-05T22:14:19.346511417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:14:19.423471 containerd[1970]: time="2024-08-05T22:14:19.421624638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.676261835s" Aug 5 22:14:19.426433 containerd[1970]: time="2024-08-05T22:14:19.426281193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:19.442120 containerd[1970]: time="2024-08-05T22:14:19.442057128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:14:19.530764 containerd[1970]: time="2024-08-05T22:14:19.530720792Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:19.532327 containerd[1970]: time="2024-08-05T22:14:19.532269911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:19.587340 containerd[1970]: time="2024-08-05T22:14:19.587295723Z" level=info msg="CreateContainer within sandbox \"f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:14:19.926097 containerd[1970]: time="2024-08-05T22:14:19.926044597Z" level=info msg="CreateContainer within sandbox \"f8f80761499a2a6621a13e2f7a7874ba1310628edb7ad8ac31d9dac27e8214e4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5fbd2029b85f86af81ed82666947e072dfb5c52e61a403e471d56dfa7587c05c\"" Aug 5 22:14:19.935297 containerd[1970]: time="2024-08-05T22:14:19.935111617Z" level=info msg="StartContainer for \"5fbd2029b85f86af81ed82666947e072dfb5c52e61a403e471d56dfa7587c05c\"" Aug 5 22:14:20.342346 systemd[1]: Started cri-containerd-5fbd2029b85f86af81ed82666947e072dfb5c52e61a403e471d56dfa7587c05c.scope - libcontainer container 5fbd2029b85f86af81ed82666947e072dfb5c52e61a403e471d56dfa7587c05c. Aug 5 22:14:20.464130 containerd[1970]: time="2024-08-05T22:14:20.462987681Z" level=info msg="StartContainer for \"5fbd2029b85f86af81ed82666947e072dfb5c52e61a403e471d56dfa7587c05c\" returns successfully" Aug 5 22:14:20.879849 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:14:20.880745 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 5 22:14:21.171070 systemd[1]: Started sshd@7-172.31.17.118:22-139.178.89.65:39626.service - OpenSSH per-connection server daemon (139.178.89.65:39626). 
Aug 5 22:14:21.201948 kubelet[3384]: I0805 22:14:21.201868 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8zch7" podStartSLOduration=4.360974568 podStartE2EDuration="30.145344822s" podCreationTimestamp="2024-08-05 22:13:51 +0000 UTC" firstStartedPulling="2024-08-05 22:13:53.746691467 +0000 UTC m=+23.752018840" lastFinishedPulling="2024-08-05 22:14:19.531061715 +0000 UTC m=+49.536389094" observedRunningTime="2024-08-05 22:14:21.14437919 +0000 UTC m=+51.149706577" watchObservedRunningTime="2024-08-05 22:14:21.145344822 +0000 UTC m=+51.150672207" Aug 5 22:14:21.526739 sshd[4362]: Accepted publickey for core from 139.178.89.65 port 39626 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:21.529232 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:21.543686 systemd-logind[1946]: New session 8 of user core. Aug 5 22:14:21.550021 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:14:21.918543 sshd[4362]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:21.924114 systemd[1]: sshd@7-172.31.17.118:22-139.178.89.65:39626.service: Deactivated successfully. Aug 5 22:14:21.927331 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:14:21.931158 systemd-logind[1946]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:14:21.933820 systemd-logind[1946]: Removed session 8. Aug 5 22:14:23.291665 containerd[1970]: time="2024-08-05T22:14:23.291011225Z" level=info msg="StopPodSandbox for \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\"" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.485 [INFO][4504] k8s.go 608: Cleaning up netns ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.486 [INFO][4504] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" iface="eth0" netns="/var/run/netns/cni-81a5c9fb-d299-6de4-408a-880791ad00d8" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.486 [INFO][4504] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" iface="eth0" netns="/var/run/netns/cni-81a5c9fb-d299-6de4-408a-880791ad00d8" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.487 [INFO][4504] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" iface="eth0" netns="/var/run/netns/cni-81a5c9fb-d299-6de4-408a-880791ad00d8" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.487 [INFO][4504] k8s.go 615: Releasing IP address(es) ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.487 [INFO][4504] utils.go 188: Calico CNI releasing IP address ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.741 [INFO][4544] ipam_plugin.go 411: Releasing address using handleID ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.742 [INFO][4544] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.743 [INFO][4544] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.764 [WARNING][4544] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.764 [INFO][4544] ipam_plugin.go 439: Releasing address using workloadID ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.790 [INFO][4544] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:23.800444 containerd[1970]: 2024-08-05 22:14:23.794 [INFO][4504] k8s.go 621: Teardown processing complete. ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:23.804430 containerd[1970]: time="2024-08-05T22:14:23.802913873Z" level=info msg="TearDown network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\" successfully" Aug 5 22:14:23.804430 containerd[1970]: time="2024-08-05T22:14:23.802986665Z" level=info msg="StopPodSandbox for \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\" returns successfully" Aug 5 22:14:23.808770 containerd[1970]: time="2024-08-05T22:14:23.807169117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sp2mz,Uid:c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c,Namespace:calico-system,Attempt:1,}" Aug 5 22:14:23.807731 systemd[1]: run-netns-cni\x2d81a5c9fb\x2dd299\x2d6de4\x2d408a\x2d880791ad00d8.mount: Deactivated successfully. Aug 5 22:14:24.389171 systemd-networkd[1807]: cali145ce3d4012: Link UP Aug 5 22:14:24.391892 systemd-networkd[1807]: cali145ce3d4012: Gained carrier Aug 5 22:14:24.400192 (udev-worker)[4582]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.113 [INFO][4556] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.143 [INFO][4556] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0 csi-node-driver- calico-system c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c 751 0 2024-08-05 22:13:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-17-118 csi-node-driver-sp2mz eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali145ce3d4012 [] []}} ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.143 [INFO][4556] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.239 [INFO][4567] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" HandleID="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.271 [INFO][4567] ipam_plugin.go 264: Auto assigning IP ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" 
HandleID="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-118", "pod":"csi-node-driver-sp2mz", "timestamp":"2024-08-05 22:14:24.239578006 +0000 UTC"}, Hostname:"ip-172-31-17-118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.271 [INFO][4567] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.271 [INFO][4567] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.272 [INFO][4567] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-118' Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.275 [INFO][4567] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.284 [INFO][4567] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.312 [INFO][4567] ipam.go 489: Trying affinity for 192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.317 [INFO][4567] ipam.go 155: Attempting to load block cidr=192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.322 [INFO][4567] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 
22:14:24.322 [INFO][4567] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.192/26 handle="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.325 [INFO][4567] ipam.go 1685: Creating new handle: k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.341 [INFO][4567] ipam.go 1203: Writing block in order to claim IPs block=192.168.67.192/26 handle="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.350 [INFO][4567] ipam.go 1216: Successfully claimed IPs: [192.168.67.193/26] block=192.168.67.192/26 handle="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.350 [INFO][4567] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.193/26] handle="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" host="ip-172-31-17-118" Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.351 [INFO][4567] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:14:24.431256 containerd[1970]: 2024-08-05 22:14:24.352 [INFO][4567] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.67.193/26] IPv6=[] ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" HandleID="k8s-pod-network.6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:24.432805 containerd[1970]: 2024-08-05 22:14:24.361 [INFO][4556] k8s.go 386: Populated endpoint ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"", Pod:"csi-node-driver-sp2mz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali145ce3d4012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:24.432805 containerd[1970]: 2024-08-05 22:14:24.362 [INFO][4556] k8s.go 387: Calico CNI using IPs: [192.168.67.193/32] ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:24.432805 containerd[1970]: 2024-08-05 22:14:24.362 [INFO][4556] dataplane_linux.go 68: Setting the host side veth name to cali145ce3d4012 ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:24.432805 containerd[1970]: 2024-08-05 22:14:24.392 [INFO][4556] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:24.432805 containerd[1970]: 2024-08-05 22:14:24.394 [INFO][4556] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf", Pod:"csi-node-driver-sp2mz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali145ce3d4012", MAC:"2a:03:e0:e5:17:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:24.432805 containerd[1970]: 2024-08-05 22:14:24.415 [INFO][4556] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf" Namespace="calico-system" Pod="csi-node-driver-sp2mz" WorkloadEndpoint="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:24.602604 containerd[1970]: time="2024-08-05T22:14:24.601626420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:24.602604 containerd[1970]: time="2024-08-05T22:14:24.602373153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:24.602604 containerd[1970]: time="2024-08-05T22:14:24.602441568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:24.602604 containerd[1970]: time="2024-08-05T22:14:24.602466255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:24.679972 systemd[1]: Started cri-containerd-6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf.scope - libcontainer container 6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf. Aug 5 22:14:24.820830 containerd[1970]: time="2024-08-05T22:14:24.818967288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sp2mz,Uid:c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf\"" Aug 5 22:14:24.822099 containerd[1970]: time="2024-08-05T22:14:24.821953805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:14:25.293140 containerd[1970]: time="2024-08-05T22:14:25.293059736Z" level=info msg="StopPodSandbox for \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\"" Aug 5 22:14:25.384684 (udev-worker)[4581]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:14:25.421403 systemd-networkd[1807]: vxlan.calico: Link UP Aug 5 22:14:25.421416 systemd-networkd[1807]: vxlan.calico: Gained carrier Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.562 [INFO][4693] k8s.go 608: Cleaning up netns ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.563 [INFO][4693] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" iface="eth0" netns="/var/run/netns/cni-76693400-3c5c-4a43-770d-fd10788be8ff" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.573 [INFO][4693] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" iface="eth0" netns="/var/run/netns/cni-76693400-3c5c-4a43-770d-fd10788be8ff" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.576 [INFO][4693] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" iface="eth0" netns="/var/run/netns/cni-76693400-3c5c-4a43-770d-fd10788be8ff" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.576 [INFO][4693] k8s.go 615: Releasing IP address(es) ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.576 [INFO][4693] utils.go 188: Calico CNI releasing IP address ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.710 [INFO][4703] ipam_plugin.go 411: Releasing address using handleID ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.720 [INFO][4703] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.720 [INFO][4703] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.749 [WARNING][4703] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.749 [INFO][4703] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.757 [INFO][4703] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:25.769674 containerd[1970]: 2024-08-05 22:14:25.761 [INFO][4693] k8s.go 621: Teardown processing complete. ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:25.777985 containerd[1970]: time="2024-08-05T22:14:25.770553666Z" level=info msg="TearDown network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\" successfully" Aug 5 22:14:25.777985 containerd[1970]: time="2024-08-05T22:14:25.770598630Z" level=info msg="StopPodSandbox for \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\" returns successfully" Aug 5 22:14:25.774925 systemd[1]: run-netns-cni\x2d76693400\x2d3c5c\x2d4a43\x2d770d\x2dfd10788be8ff.mount: Deactivated successfully. 
Aug 5 22:14:25.781022 containerd[1970]: time="2024-08-05T22:14:25.780856509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wm525,Uid:8bb515aa-fe24-4eff-89f5-c6780d1b60c8,Namespace:kube-system,Attempt:1,}" Aug 5 22:14:26.034804 systemd-networkd[1807]: cali145ce3d4012: Gained IPv6LL Aug 5 22:14:26.460524 systemd-networkd[1807]: cali879ff8e90c4: Link UP Aug 5 22:14:26.462134 systemd-networkd[1807]: cali879ff8e90c4: Gained carrier Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.125 [INFO][4740] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0 coredns-7db6d8ff4d- kube-system 8bb515aa-fe24-4eff-89f5-c6780d1b60c8 763 0 2024-08-05 22:13:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-118 coredns-7db6d8ff4d-wm525 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali879ff8e90c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.125 [INFO][4740] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.220 [INFO][4763] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" 
HandleID="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.244 [INFO][4763] ipam_plugin.go 264: Auto assigning IP ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" HandleID="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036aac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-118", "pod":"coredns-7db6d8ff4d-wm525", "timestamp":"2024-08-05 22:14:26.22091681 +0000 UTC"}, Hostname:"ip-172-31-17-118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.329 [INFO][4763] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.334 [INFO][4763] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.334 [INFO][4763] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-118' Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.341 [INFO][4763] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.361 [INFO][4763] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.386 [INFO][4763] ipam.go 489: Trying affinity for 192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.395 [INFO][4763] ipam.go 155: Attempting to load block cidr=192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.400 [INFO][4763] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.401 [INFO][4763] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.192/26 handle="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.405 [INFO][4763] ipam.go 1685: Creating new handle: k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.411 [INFO][4763] ipam.go 1203: Writing block in order to claim IPs block=192.168.67.192/26 handle="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.427 [INFO][4763] ipam.go 1216: Successfully claimed IPs: [192.168.67.194/26] block=192.168.67.192/26 
handle="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.431 [INFO][4763] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.194/26] handle="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" host="ip-172-31-17-118" Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.431 [INFO][4763] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:26.502809 containerd[1970]: 2024-08-05 22:14:26.432 [INFO][4763] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.67.194/26] IPv6=[] ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" HandleID="k8s-pod-network.83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:26.504527 containerd[1970]: 2024-08-05 22:14:26.450 [INFO][4740] k8s.go 386: Populated endpoint ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8bb515aa-fe24-4eff-89f5-c6780d1b60c8", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"", Pod:"coredns-7db6d8ff4d-wm525", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali879ff8e90c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:26.504527 containerd[1970]: 2024-08-05 22:14:26.450 [INFO][4740] k8s.go 387: Calico CNI using IPs: [192.168.67.194/32] ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:26.504527 containerd[1970]: 2024-08-05 22:14:26.451 [INFO][4740] dataplane_linux.go 68: Setting the host side veth name to cali879ff8e90c4 ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:26.504527 containerd[1970]: 2024-08-05 22:14:26.463 [INFO][4740] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" 
WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:26.504527 containerd[1970]: 2024-08-05 22:14:26.464 [INFO][4740] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8bb515aa-fe24-4eff-89f5-c6780d1b60c8", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb", Pod:"coredns-7db6d8ff4d-wm525", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali879ff8e90c4", MAC:"d6:92:3a:62:f4:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:26.504527 containerd[1970]: 2024-08-05 22:14:26.493 [INFO][4740] k8s.go 500: Wrote updated endpoint to datastore ContainerID="83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wm525" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:26.610536 containerd[1970]: time="2024-08-05T22:14:26.609009065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:26.610536 containerd[1970]: time="2024-08-05T22:14:26.609089959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:26.610536 containerd[1970]: time="2024-08-05T22:14:26.609135411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:26.610536 containerd[1970]: time="2024-08-05T22:14:26.609156057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:26.702014 systemd[1]: Started cri-containerd-83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb.scope - libcontainer container 83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb. 
Aug 5 22:14:26.792504 containerd[1970]: time="2024-08-05T22:14:26.792428458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:26.797943 containerd[1970]: time="2024-08-05T22:14:26.796771392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:14:26.799078 containerd[1970]: time="2024-08-05T22:14:26.799020061Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:26.805997 containerd[1970]: time="2024-08-05T22:14:26.805954019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:26.821611 containerd[1970]: time="2024-08-05T22:14:26.821562845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wm525,Uid:8bb515aa-fe24-4eff-89f5-c6780d1b60c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb\"" Aug 5 22:14:26.838623 containerd[1970]: time="2024-08-05T22:14:26.838571466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.016567749s" Aug 5 22:14:26.838917 containerd[1970]: time="2024-08-05T22:14:26.838889565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:14:26.844614 containerd[1970]: 
time="2024-08-05T22:14:26.844571653Z" level=info msg="CreateContainer within sandbox \"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:14:26.851123 containerd[1970]: time="2024-08-05T22:14:26.851076785Z" level=info msg="CreateContainer within sandbox \"83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:14:26.893102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104484637.mount: Deactivated successfully. Aug 5 22:14:26.914347 containerd[1970]: time="2024-08-05T22:14:26.914297982Z" level=info msg="CreateContainer within sandbox \"83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c44ed2cda4e9ad057800e18455b68a23d207320471794c2f9fba9825d00d52f7\"" Aug 5 22:14:26.917419 containerd[1970]: time="2024-08-05T22:14:26.916204596Z" level=info msg="StartContainer for \"c44ed2cda4e9ad057800e18455b68a23d207320471794c2f9fba9825d00d52f7\"" Aug 5 22:14:26.920588 containerd[1970]: time="2024-08-05T22:14:26.920547766Z" level=info msg="CreateContainer within sandbox \"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8c12754891ac3d0e8c58ca27998b14e8ddb274ef29c4837e69a94c319b92448f\"" Aug 5 22:14:26.925671 containerd[1970]: time="2024-08-05T22:14:26.924904479Z" level=info msg="StartContainer for \"8c12754891ac3d0e8c58ca27998b14e8ddb274ef29c4837e69a94c319b92448f\"" Aug 5 22:14:26.992109 systemd[1]: Started sshd@8-172.31.17.118:22-139.178.89.65:39632.service - OpenSSH per-connection server daemon (139.178.89.65:39632). 
Aug 5 22:14:27.036064 systemd[1]: Started cri-containerd-c44ed2cda4e9ad057800e18455b68a23d207320471794c2f9fba9825d00d52f7.scope - libcontainer container c44ed2cda4e9ad057800e18455b68a23d207320471794c2f9fba9825d00d52f7. Aug 5 22:14:27.088316 systemd[1]: Started cri-containerd-8c12754891ac3d0e8c58ca27998b14e8ddb274ef29c4837e69a94c319b92448f.scope - libcontainer container 8c12754891ac3d0e8c58ca27998b14e8ddb274ef29c4837e69a94c319b92448f. Aug 5 22:14:27.225387 containerd[1970]: time="2024-08-05T22:14:27.224264892Z" level=info msg="StartContainer for \"c44ed2cda4e9ad057800e18455b68a23d207320471794c2f9fba9825d00d52f7\" returns successfully" Aug 5 22:14:27.250752 systemd-networkd[1807]: vxlan.calico: Gained IPv6LL Aug 5 22:14:27.300198 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 39632 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:27.304340 containerd[1970]: time="2024-08-05T22:14:27.301965929Z" level=info msg="StopPodSandbox for \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\"" Aug 5 22:14:27.307612 containerd[1970]: time="2024-08-05T22:14:27.305182228Z" level=info msg="StopPodSandbox for \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\"" Aug 5 22:14:27.313258 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:27.335205 systemd-logind[1946]: New session 9 of user core. Aug 5 22:14:27.340237 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 5 22:14:27.390174 containerd[1970]: time="2024-08-05T22:14:27.390045137Z" level=info msg="StartContainer for \"8c12754891ac3d0e8c58ca27998b14e8ddb274ef29c4837e69a94c319b92448f\" returns successfully" Aug 5 22:14:27.397178 containerd[1970]: time="2024-08-05T22:14:27.397131973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.574 [INFO][4940] k8s.go 608: Cleaning up netns ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.575 [INFO][4940] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" iface="eth0" netns="/var/run/netns/cni-c4889d64-d665-c639-8006-dd325d9ddea1" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.575 [INFO][4940] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" iface="eth0" netns="/var/run/netns/cni-c4889d64-d665-c639-8006-dd325d9ddea1" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.576 [INFO][4940] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" iface="eth0" netns="/var/run/netns/cni-c4889d64-d665-c639-8006-dd325d9ddea1" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.576 [INFO][4940] k8s.go 615: Releasing IP address(es) ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.576 [INFO][4940] utils.go 188: Calico CNI releasing IP address ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.669 [INFO][4957] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.670 [INFO][4957] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.670 [INFO][4957] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.682 [WARNING][4957] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.682 [INFO][4957] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.688 [INFO][4957] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:27.698692 containerd[1970]: 2024-08-05 22:14:27.693 [INFO][4940] k8s.go 621: Teardown processing complete. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Aug 5 22:14:27.701245 containerd[1970]: time="2024-08-05T22:14:27.700728771Z" level=info msg="TearDown network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\" successfully" Aug 5 22:14:27.701245 containerd[1970]: time="2024-08-05T22:14:27.700874661Z" level=info msg="StopPodSandbox for \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\" returns successfully" Aug 5 22:14:27.704074 containerd[1970]: time="2024-08-05T22:14:27.703184179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2zqvt,Uid:ec49b585-a6b6-451a-9d12-04277915267d,Namespace:kube-system,Attempt:1,}" Aug 5 22:14:27.899632 systemd[1]: run-netns-cni\x2dc4889d64\x2dd665\x2dc639\x2d8006\x2ddd325d9ddea1.mount: Deactivated successfully. 
Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.700 [INFO][4938] k8s.go 608: Cleaning up netns ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.700 [INFO][4938] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" iface="eth0" netns="/var/run/netns/cni-04c90257-c351-214c-22b7-12158bbb5cd6" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.701 [INFO][4938] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" iface="eth0" netns="/var/run/netns/cni-04c90257-c351-214c-22b7-12158bbb5cd6" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.703 [INFO][4938] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" iface="eth0" netns="/var/run/netns/cni-04c90257-c351-214c-22b7-12158bbb5cd6" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.703 [INFO][4938] k8s.go 615: Releasing IP address(es) ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.703 [INFO][4938] utils.go 188: Calico CNI releasing IP address ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.902 [INFO][4967] ipam_plugin.go 411: Releasing address using handleID ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.904 [INFO][4967] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:27.910 [INFO][4967] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:28.003 [WARNING][4967] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:28.003 [INFO][4967] ipam_plugin.go 439: Releasing address using workloadID ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:28.026 [INFO][4967] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:28.035237 containerd[1970]: 2024-08-05 22:14:28.031 [INFO][4938] k8s.go 621: Teardown processing complete. 
ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:28.049215 containerd[1970]: time="2024-08-05T22:14:28.039715577Z" level=info msg="TearDown network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\" successfully" Aug 5 22:14:28.049215 containerd[1970]: time="2024-08-05T22:14:28.039764482Z" level=info msg="StopPodSandbox for \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\" returns successfully" Aug 5 22:14:28.049215 containerd[1970]: time="2024-08-05T22:14:28.047968279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b54bf7c66-tr2nl,Uid:64251e71-0ce8-4f60-9537-431748961741,Namespace:calico-system,Attempt:1,}" Aug 5 22:14:28.051338 systemd[1]: run-netns-cni\x2d04c90257\x2dc351\x2d214c\x2d22b7\x2d12158bbb5cd6.mount: Deactivated successfully. Aug 5 22:14:28.110282 sshd[4864]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:28.125116 systemd[1]: sshd@8-172.31.17.118:22-139.178.89.65:39632.service: Deactivated successfully. Aug 5 22:14:28.131101 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:14:28.141910 systemd-logind[1946]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:14:28.143495 systemd-networkd[1807]: cali879ff8e90c4: Gained IPv6LL Aug 5 22:14:28.148818 systemd-logind[1946]: Removed session 9. 
Aug 5 22:14:28.508687 systemd-networkd[1807]: calic88385956a9: Link UP Aug 5 22:14:28.511276 systemd-networkd[1807]: calic88385956a9: Gained carrier Aug 5 22:14:28.555052 kubelet[3384]: I0805 22:14:28.554971 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wm525" podStartSLOduration=45.554945626 podStartE2EDuration="45.554945626s" podCreationTimestamp="2024-08-05 22:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:14:28.165977918 +0000 UTC m=+58.171305306" watchObservedRunningTime="2024-08-05 22:14:28.554945626 +0000 UTC m=+58.560273013" Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.018 [INFO][4977] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0 coredns-7db6d8ff4d- kube-system ec49b585-a6b6-451a-9d12-04277915267d 789 0 2024-08-05 22:13:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-118 coredns-7db6d8ff4d-2zqvt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic88385956a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-" Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.018 [INFO][4977] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" Aug 5 22:14:28.562154 containerd[1970]: 
2024-08-05 22:14:28.319 [INFO][4995] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" HandleID="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.361 [INFO][4995] ipam_plugin.go 264: Auto assigning IP ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" HandleID="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00065ee10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-118", "pod":"coredns-7db6d8ff4d-2zqvt", "timestamp":"2024-08-05 22:14:28.319240406 +0000 UTC"}, Hostname:"ip-172-31-17-118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.361 [INFO][4995] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.361 [INFO][4995] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.361 [INFO][4995] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-118'
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.370 [INFO][4995] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.385 [INFO][4995] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.423 [INFO][4995] ipam.go 489: Trying affinity for 192.168.67.192/26 host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.435 [INFO][4995] ipam.go 155: Attempting to load block cidr=192.168.67.192/26 host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.445 [INFO][4995] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.192/26 host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.447 [INFO][4995] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.192/26 handle="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.455 [INFO][4995] ipam.go 1685: Creating new handle: k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.473 [INFO][4995] ipam.go 1203: Writing block in order to claim IPs block=192.168.67.192/26 handle="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.489 [INFO][4995] ipam.go 1216: Successfully claimed IPs: [192.168.67.195/26] block=192.168.67.192/26 handle="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.490 [INFO][4995] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.195/26] handle="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" host="ip-172-31-17-118"
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.490 [INFO][4995] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:14:28.562154 containerd[1970]: 2024-08-05 22:14:28.490 [INFO][4995] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.67.195/26] IPv6=[] ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" HandleID="k8s-pod-network.f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:28.565330 containerd[1970]: 2024-08-05 22:14:28.494 [INFO][4977] k8s.go 386: Populated endpoint ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec49b585-a6b6-451a-9d12-04277915267d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"", Pod:"coredns-7db6d8ff4d-2zqvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic88385956a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:28.565330 containerd[1970]: 2024-08-05 22:14:28.495 [INFO][4977] k8s.go 387: Calico CNI using IPs: [192.168.67.195/32] ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:28.565330 containerd[1970]: 2024-08-05 22:14:28.496 [INFO][4977] dataplane_linux.go 68: Setting the host side veth name to calic88385956a9 ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:28.565330 containerd[1970]: 2024-08-05 22:14:28.511 [INFO][4977] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:28.565330 containerd[1970]: 2024-08-05 22:14:28.513 [INFO][4977] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec49b585-a6b6-451a-9d12-04277915267d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5", Pod:"coredns-7db6d8ff4d-2zqvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic88385956a9", MAC:"7a:b4:f0:04:60:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:28.565330 containerd[1970]: 2024-08-05 22:14:28.556 [INFO][4977] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2zqvt" WorkloadEndpoint="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:28.614060 systemd-networkd[1807]: cali792233efcde: Link UP
Aug 5 22:14:28.616499 systemd-networkd[1807]: cali792233efcde: Gained carrier
Aug 5 22:14:28.652724 containerd[1970]: time="2024-08-05T22:14:28.645969602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:14:28.652724 containerd[1970]: time="2024-08-05T22:14:28.647210342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:28.652724 containerd[1970]: time="2024-08-05T22:14:28.647250173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:14:28.652724 containerd[1970]: time="2024-08-05T22:14:28.647265568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.266 [INFO][4985] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0 calico-kube-controllers-b54bf7c66- calico-system 64251e71-0ce8-4f60-9537-431748961741 790 0 2024-08-05 22:13:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b54bf7c66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-118 calico-kube-controllers-b54bf7c66-tr2nl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali792233efcde [] []}} ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.266 [INFO][4985] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.385 [INFO][5005] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" HandleID="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.455 [INFO][5005] ipam_plugin.go 264: Auto assigning IP ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" HandleID="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360fb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-118", "pod":"calico-kube-controllers-b54bf7c66-tr2nl", "timestamp":"2024-08-05 22:14:28.38577375 +0000 UTC"}, Hostname:"ip-172-31-17-118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.455 [INFO][5005] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.490 [INFO][5005] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.490 [INFO][5005] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-118'
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.496 [INFO][5005] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.509 [INFO][5005] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.529 [INFO][5005] ipam.go 489: Trying affinity for 192.168.67.192/26 host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.539 [INFO][5005] ipam.go 155: Attempting to load block cidr=192.168.67.192/26 host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.560 [INFO][5005] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.192/26 host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.560 [INFO][5005] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.192/26 handle="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.566 [INFO][5005] ipam.go 1685: Creating new handle: k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.582 [INFO][5005] ipam.go 1203: Writing block in order to claim IPs block=192.168.67.192/26 handle="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.596 [INFO][5005] ipam.go 1216: Successfully claimed IPs: [192.168.67.196/26] block=192.168.67.192/26 handle="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.596 [INFO][5005] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.196/26] handle="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" host="ip-172-31-17-118"
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.596 [INFO][5005] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:14:28.659079 containerd[1970]: 2024-08-05 22:14:28.596 [INFO][5005] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.67.196/26] IPv6=[] ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" HandleID="k8s-pod-network.25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0"
Aug 5 22:14:28.666798 containerd[1970]: 2024-08-05 22:14:28.603 [INFO][4985] k8s.go 386: Populated endpoint ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0", GenerateName:"calico-kube-controllers-b54bf7c66-", Namespace:"calico-system", SelfLink:"", UID:"64251e71-0ce8-4f60-9537-431748961741", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b54bf7c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"", Pod:"calico-kube-controllers-b54bf7c66-tr2nl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali792233efcde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:28.666798 containerd[1970]: 2024-08-05 22:14:28.603 [INFO][4985] k8s.go 387: Calico CNI using IPs: [192.168.67.196/32] ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0"
Aug 5 22:14:28.666798 containerd[1970]: 2024-08-05 22:14:28.603 [INFO][4985] dataplane_linux.go 68: Setting the host side veth name to cali792233efcde ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0"
Aug 5 22:14:28.666798 containerd[1970]: 2024-08-05 22:14:28.606 [INFO][4985] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0"
Aug 5 22:14:28.666798 containerd[1970]: 2024-08-05 22:14:28.606 [INFO][4985] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0", GenerateName:"calico-kube-controllers-b54bf7c66-", Namespace:"calico-system", SelfLink:"", UID:"64251e71-0ce8-4f60-9537-431748961741", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b54bf7c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5", Pod:"calico-kube-controllers-b54bf7c66-tr2nl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali792233efcde", MAC:"02:18:56:3b:9b:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:28.666798 containerd[1970]: 2024-08-05 22:14:28.652 [INFO][4985] k8s.go 500: Wrote updated endpoint to datastore ContainerID="25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5" Namespace="calico-system" Pod="calico-kube-controllers-b54bf7c66-tr2nl" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0"
Aug 5 22:14:28.729106 systemd[1]: Started cri-containerd-f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5.scope - libcontainer container f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5.
Aug 5 22:14:28.814790 containerd[1970]: time="2024-08-05T22:14:28.812201508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:14:28.814790 containerd[1970]: time="2024-08-05T22:14:28.812283274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:28.814790 containerd[1970]: time="2024-08-05T22:14:28.812327449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:14:28.814790 containerd[1970]: time="2024-08-05T22:14:28.812356094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:28.894296 systemd[1]: run-containerd-runc-k8s.io-f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5-runc.sB7EOl.mount: Deactivated successfully.
Aug 5 22:14:28.908904 systemd[1]: Started cri-containerd-25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5.scope - libcontainer container 25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5.
Aug 5 22:14:28.967341 containerd[1970]: time="2024-08-05T22:14:28.967283804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2zqvt,Uid:ec49b585-a6b6-451a-9d12-04277915267d,Namespace:kube-system,Attempt:1,} returns sandbox id \"f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5\""
Aug 5 22:14:28.979551 containerd[1970]: time="2024-08-05T22:14:28.979500638Z" level=info msg="CreateContainer within sandbox \"f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 5 22:14:29.046694 containerd[1970]: time="2024-08-05T22:14:29.046368261Z" level=info msg="CreateContainer within sandbox \"f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abfa835678321ac14f7f38051cf6cc9e6bbd45bad4c063efb57187678bc313c0\""
Aug 5 22:14:29.047226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021721705.mount: Deactivated successfully.
Aug 5 22:14:29.050829 containerd[1970]: time="2024-08-05T22:14:29.050421711Z" level=info msg="StartContainer for \"abfa835678321ac14f7f38051cf6cc9e6bbd45bad4c063efb57187678bc313c0\""
Aug 5 22:14:29.140026 systemd[1]: Started cri-containerd-abfa835678321ac14f7f38051cf6cc9e6bbd45bad4c063efb57187678bc313c0.scope - libcontainer container abfa835678321ac14f7f38051cf6cc9e6bbd45bad4c063efb57187678bc313c0.
Aug 5 22:14:29.250391 containerd[1970]: time="2024-08-05T22:14:29.250193296Z" level=info msg="StartContainer for \"abfa835678321ac14f7f38051cf6cc9e6bbd45bad4c063efb57187678bc313c0\" returns successfully"
Aug 5 22:14:29.456799 containerd[1970]: time="2024-08-05T22:14:29.456033875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b54bf7c66-tr2nl,Uid:64251e71-0ce8-4f60-9537-431748961741,Namespace:calico-system,Attempt:1,} returns sandbox id \"25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5\""
Aug 5 22:14:29.678760 systemd-networkd[1807]: calic88385956a9: Gained IPv6LL
Aug 5 22:14:29.869788 containerd[1970]: time="2024-08-05T22:14:29.868565756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:29.871482 containerd[1970]: time="2024-08-05T22:14:29.871419229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655"
Aug 5 22:14:29.873526 containerd[1970]: time="2024-08-05T22:14:29.873476525Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:29.883545 containerd[1970]: time="2024-08-05T22:14:29.883467000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.486095104s"
Aug 5 22:14:29.883545 containerd[1970]: time="2024-08-05T22:14:29.883513487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\""
Aug 5 22:14:29.883968 containerd[1970]: time="2024-08-05T22:14:29.883658628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:29.886498 containerd[1970]: time="2024-08-05T22:14:29.885915537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\""
Aug 5 22:14:29.888889 containerd[1970]: time="2024-08-05T22:14:29.888847334Z" level=info msg="CreateContainer within sandbox \"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 5 22:14:29.922439 containerd[1970]: time="2024-08-05T22:14:29.922316310Z" level=info msg="CreateContainer within sandbox \"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0cf9b82a720f177c67ee2b320750a58af37b4b055090ea7b65fe1d1703346e71\""
Aug 5 22:14:29.923689 containerd[1970]: time="2024-08-05T22:14:29.923216657Z" level=info msg="StartContainer for \"0cf9b82a720f177c67ee2b320750a58af37b4b055090ea7b65fe1d1703346e71\""
Aug 5 22:14:29.999825 systemd-networkd[1807]: cali792233efcde: Gained IPv6LL
Aug 5 22:14:30.012540 systemd[1]: run-containerd-runc-k8s.io-0cf9b82a720f177c67ee2b320750a58af37b4b055090ea7b65fe1d1703346e71-runc.Z8hCNV.mount: Deactivated successfully.
Aug 5 22:14:30.035991 systemd[1]: Started cri-containerd-0cf9b82a720f177c67ee2b320750a58af37b4b055090ea7b65fe1d1703346e71.scope - libcontainer container 0cf9b82a720f177c67ee2b320750a58af37b4b055090ea7b65fe1d1703346e71.
Aug 5 22:14:30.048190 kubelet[3384]: I0805 22:14:30.047975 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2zqvt" podStartSLOduration=47.047950803 podStartE2EDuration="47.047950803s" podCreationTimestamp="2024-08-05 22:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:14:30.014898841 +0000 UTC m=+60.020226227" watchObservedRunningTime="2024-08-05 22:14:30.047950803 +0000 UTC m=+60.053278183"
Aug 5 22:14:30.119387 containerd[1970]: time="2024-08-05T22:14:30.119341253Z" level=info msg="StartContainer for \"0cf9b82a720f177c67ee2b320750a58af37b4b055090ea7b65fe1d1703346e71\" returns successfully"
Aug 5 22:14:30.301454 containerd[1970]: time="2024-08-05T22:14:30.301039204Z" level=info msg="StopPodSandbox for \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\""
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.393 [WARNING][5218] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec49b585-a6b6-451a-9d12-04277915267d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5", Pod:"coredns-7db6d8ff4d-2zqvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic88385956a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.393 [INFO][5218] k8s.go 608: Cleaning up netns ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.393 [INFO][5218] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" iface="eth0" netns=""
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.399 [INFO][5218] k8s.go 615: Releasing IP address(es) ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.399 [INFO][5218] utils.go 188: Calico CNI releasing IP address ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.702 [INFO][5227] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.703 [INFO][5227] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.705 [INFO][5227] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.756 [WARNING][5227] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.756 [INFO][5227] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.788 [INFO][5227] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:14:30.801651 containerd[1970]: 2024-08-05 22:14:30.797 [INFO][5218] k8s.go 621: Teardown processing complete. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:30.801651 containerd[1970]: time="2024-08-05T22:14:30.801433224Z" level=info msg="TearDown network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\" successfully"
Aug 5 22:14:30.801651 containerd[1970]: time="2024-08-05T22:14:30.801529139Z" level=info msg="StopPodSandbox for \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\" returns successfully"
Aug 5 22:14:30.804626 containerd[1970]: time="2024-08-05T22:14:30.803416763Z" level=info msg="RemovePodSandbox for \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\""
Aug 5 22:14:30.808557 containerd[1970]: time="2024-08-05T22:14:30.807918825Z" level=info msg="Forcibly stopping sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\""
Aug 5 22:14:30.931788 kubelet[3384]: I0805 22:14:30.931747 3384 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 5 22:14:30.934685 kubelet[3384]: I0805 22:14:30.934632 3384 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:30.943 [WARNING][5246] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec49b585-a6b6-451a-9d12-04277915267d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"f7a90e24c37c0384166d8291cea4cf7f0891b48aa0fcdb60e8fd3ff2d7cfa9d5", Pod:"coredns-7db6d8ff4d-2zqvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic88385956a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:30.946 [INFO][5246] k8s.go 608: Cleaning up netns ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:30.946 [INFO][5246] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" iface="eth0" netns=""
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:30.946 [INFO][5246] k8s.go 615: Releasing IP address(es) ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:30.946 [INFO][5246] utils.go 188: Calico CNI releasing IP address ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:31.170 [INFO][5253] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:31.173 [INFO][5253] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:31.173 [INFO][5253] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:31.185 [WARNING][5253] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:31.186 [INFO][5253] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" HandleID="k8s-pod-network.4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--2zqvt-eth0"
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:31.190 [INFO][5253] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:14:31.199604 containerd[1970]: 2024-08-05 22:14:31.193 [INFO][5246] k8s.go 621: Teardown processing complete. ContainerID="4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d"
Aug 5 22:14:31.199604 containerd[1970]: time="2024-08-05T22:14:31.197790569Z" level=info msg="TearDown network for sandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\" successfully"
Aug 5 22:14:31.285235 containerd[1970]: time="2024-08-05T22:14:31.283259410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:14:31.285235 containerd[1970]: time="2024-08-05T22:14:31.283685601Z" level=info msg="RemovePodSandbox \"4b95b1e55cba6737343e84bf55d4c575cb961edc2ab7500eab7018669c13ad7d\" returns successfully" Aug 5 22:14:31.286779 containerd[1970]: time="2024-08-05T22:14:31.285809231Z" level=info msg="StopPodSandbox for \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\"" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.610 [WARNING][5284] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0", GenerateName:"calico-kube-controllers-b54bf7c66-", Namespace:"calico-system", SelfLink:"", UID:"64251e71-0ce8-4f60-9537-431748961741", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b54bf7c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5", Pod:"calico-kube-controllers-b54bf7c66-tr2nl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali792233efcde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.610 [INFO][5284] k8s.go 608: Cleaning up netns ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.610 [INFO][5284] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" iface="eth0" netns="" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.611 [INFO][5284] k8s.go 615: Releasing IP address(es) ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.611 [INFO][5284] utils.go 188: Calico CNI releasing IP address ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.753 [INFO][5294] ipam_plugin.go 411: Releasing address using handleID ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.756 [INFO][5294] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.756 [INFO][5294] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.787 [WARNING][5294] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.788 [INFO][5294] ipam_plugin.go 439: Releasing address using workloadID ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.797 [INFO][5294] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:31.813823 containerd[1970]: 2024-08-05 22:14:31.804 [INFO][5284] k8s.go 621: Teardown processing complete. ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:31.816738 containerd[1970]: time="2024-08-05T22:14:31.813847269Z" level=info msg="TearDown network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\" successfully" Aug 5 22:14:31.816738 containerd[1970]: time="2024-08-05T22:14:31.813876942Z" level=info msg="StopPodSandbox for \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\" returns successfully" Aug 5 22:14:31.816738 containerd[1970]: time="2024-08-05T22:14:31.814458580Z" level=info msg="RemovePodSandbox for \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\"" Aug 5 22:14:31.816738 containerd[1970]: time="2024-08-05T22:14:31.814496178Z" level=info msg="Forcibly stopping sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\"" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:31.924 [WARNING][5313] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0", GenerateName:"calico-kube-controllers-b54bf7c66-", Namespace:"calico-system", SelfLink:"", UID:"64251e71-0ce8-4f60-9537-431748961741", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b54bf7c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5", Pod:"calico-kube-controllers-b54bf7c66-tr2nl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali792233efcde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:31.925 [INFO][5313] k8s.go 608: Cleaning up netns ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:31.925 [INFO][5313] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" iface="eth0" netns="" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:31.925 [INFO][5313] k8s.go 615: Releasing IP address(es) ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:31.925 [INFO][5313] utils.go 188: Calico CNI releasing IP address ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:32.075 [INFO][5319] ipam_plugin.go 411: Releasing address using handleID ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:32.077 [INFO][5319] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:32.077 [INFO][5319] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:32.148 [WARNING][5319] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:32.148 [INFO][5319] ipam_plugin.go 439: Releasing address using workloadID ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" HandleID="k8s-pod-network.101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Workload="ip--172--31--17--118-k8s-calico--kube--controllers--b54bf7c66--tr2nl-eth0" Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:32.155 [INFO][5319] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:32.176051 containerd[1970]: 2024-08-05 22:14:32.161 [INFO][5313] k8s.go 621: Teardown processing complete. ContainerID="101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad" Aug 5 22:14:32.176051 containerd[1970]: time="2024-08-05T22:14:32.175096708Z" level=info msg="TearDown network for sandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\" successfully" Aug 5 22:14:32.202449 containerd[1970]: time="2024-08-05T22:14:32.202397785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:14:32.203231 containerd[1970]: time="2024-08-05T22:14:32.202495689Z" level=info msg="RemovePodSandbox \"101445f2f12a28b064693e157dfb1ef92cc93bf36bafffad8650e215cce6bcad\" returns successfully" Aug 5 22:14:32.208795 containerd[1970]: time="2024-08-05T22:14:32.207943438Z" level=info msg="StopPodSandbox for \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\"" Aug 5 22:14:32.531942 ntpd[1938]: Listen normally on 7 vxlan.calico 192.168.67.192:123 Aug 5 22:14:32.535185 ntpd[1938]: 5 Aug 22:14:32 ntpd[1938]: Listen normally on 7 vxlan.calico 192.168.67.192:123 Aug 5 22:14:32.535185 ntpd[1938]: 5 Aug 22:14:32 ntpd[1938]: Listen normally on 8 cali145ce3d4012 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 5 22:14:32.535185 ntpd[1938]: 5 Aug 22:14:32 ntpd[1938]: Listen normally on 9 vxlan.calico [fe80::6495:19ff:fecc:5318%5]:123 Aug 5 22:14:32.535185 ntpd[1938]: 5 Aug 22:14:32 ntpd[1938]: Listen normally on 10 cali879ff8e90c4 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:14:32.535185 ntpd[1938]: 5 Aug 22:14:32 ntpd[1938]: Listen normally on 11 calic88385956a9 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:14:32.535185 ntpd[1938]: 5 Aug 22:14:32 ntpd[1938]: Listen normally on 12 cali792233efcde [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:14:32.532060 ntpd[1938]: Listen normally on 8 cali145ce3d4012 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 5 22:14:32.532122 ntpd[1938]: Listen normally on 9 vxlan.calico [fe80::6495:19ff:fecc:5318%5]:123 Aug 5 22:14:32.532162 ntpd[1938]: Listen normally on 10 cali879ff8e90c4 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:14:32.532201 ntpd[1938]: Listen normally on 11 calic88385956a9 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:14:32.532246 ntpd[1938]: Listen normally on 12 cali792233efcde [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.410 [WARNING][5341] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf", Pod:"csi-node-driver-sp2mz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali145ce3d4012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.410 [INFO][5341] k8s.go 608: Cleaning up netns ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.410 [INFO][5341] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" iface="eth0" netns="" Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.410 [INFO][5341] k8s.go 615: Releasing IP address(es) ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.410 [INFO][5341] utils.go 188: Calico CNI releasing IP address ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.547 [INFO][5347] ipam_plugin.go 411: Releasing address using handleID ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.548 [INFO][5347] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.548 [INFO][5347] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.584 [WARNING][5347] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.584 [INFO][5347] ipam_plugin.go 439: Releasing address using workloadID ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.591 [INFO][5347] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:32.603694 containerd[1970]: 2024-08-05 22:14:32.599 [INFO][5341] k8s.go 621: Teardown processing complete. ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.605041 containerd[1970]: time="2024-08-05T22:14:32.603626130Z" level=info msg="TearDown network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\" successfully" Aug 5 22:14:32.605041 containerd[1970]: time="2024-08-05T22:14:32.604599595Z" level=info msg="StopPodSandbox for \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\" returns successfully" Aug 5 22:14:32.606086 containerd[1970]: time="2024-08-05T22:14:32.606052481Z" level=info msg="RemovePodSandbox for \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\"" Aug 5 22:14:32.606188 containerd[1970]: time="2024-08-05T22:14:32.606090843Z" level=info msg="Forcibly stopping sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\"" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.751 [WARNING][5365] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c04d2f71-56dd-4ddf-be4f-eac15a3c0c8c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"6bdb1e94537aa7e741973ec0e78a50a9b6e9973bae05d4d2a314fd987d7c29cf", Pod:"csi-node-driver-sp2mz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali145ce3d4012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.753 [INFO][5365] k8s.go 608: Cleaning up netns ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.753 [INFO][5365] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" iface="eth0" netns="" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.753 [INFO][5365] k8s.go 615: Releasing IP address(es) ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.753 [INFO][5365] utils.go 188: Calico CNI releasing IP address ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.881 [INFO][5371] ipam_plugin.go 411: Releasing address using handleID ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.881 [INFO][5371] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.882 [INFO][5371] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.898 [WARNING][5371] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.898 [INFO][5371] ipam_plugin.go 439: Releasing address using workloadID ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" HandleID="k8s-pod-network.42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Workload="ip--172--31--17--118-k8s-csi--node--driver--sp2mz-eth0" Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.905 [INFO][5371] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:32.913158 containerd[1970]: 2024-08-05 22:14:32.907 [INFO][5365] k8s.go 621: Teardown processing complete. ContainerID="42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d" Aug 5 22:14:32.913158 containerd[1970]: time="2024-08-05T22:14:32.911682211Z" level=info msg="TearDown network for sandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\" successfully" Aug 5 22:14:32.917744 containerd[1970]: time="2024-08-05T22:14:32.917554488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:14:32.918363 containerd[1970]: time="2024-08-05T22:14:32.917632050Z" level=info msg="RemovePodSandbox \"42c628630dd99e1f01c1fc357db7ff993e8d3e4df7db0f84f53b30a78da1238d\" returns successfully" Aug 5 22:14:32.920030 containerd[1970]: time="2024-08-05T22:14:32.919996784Z" level=info msg="StopPodSandbox for \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\"" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.011 [WARNING][5389] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8bb515aa-fe24-4eff-89f5-c6780d1b60c8", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb", Pod:"coredns-7db6d8ff4d-wm525", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali879ff8e90c4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.011 [INFO][5389] k8s.go 608: Cleaning up netns ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.011 [INFO][5389] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" iface="eth0" netns="" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.011 [INFO][5389] k8s.go 615: Releasing IP address(es) ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.011 [INFO][5389] utils.go 188: Calico CNI releasing IP address ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.078 [INFO][5396] ipam_plugin.go 411: Releasing address using handleID ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.081 [INFO][5396] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.081 [INFO][5396] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.098 [WARNING][5396] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.098 [INFO][5396] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.102 [INFO][5396] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:33.114966 containerd[1970]: 2024-08-05 22:14:33.104 [INFO][5389] k8s.go 621: Teardown processing complete. 
ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.116768 containerd[1970]: time="2024-08-05T22:14:33.115944183Z" level=info msg="TearDown network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\" successfully" Aug 5 22:14:33.116768 containerd[1970]: time="2024-08-05T22:14:33.115984389Z" level=info msg="StopPodSandbox for \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\" returns successfully" Aug 5 22:14:33.118558 containerd[1970]: time="2024-08-05T22:14:33.118405602Z" level=info msg="RemovePodSandbox for \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\"" Aug 5 22:14:33.118746 containerd[1970]: time="2024-08-05T22:14:33.118684181Z" level=info msg="Forcibly stopping sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\"" Aug 5 22:14:33.148945 systemd[1]: Started sshd@9-172.31.17.118:22-139.178.89.65:49520.service - OpenSSH per-connection server daemon (139.178.89.65:49520). Aug 5 22:14:33.388470 sshd[5411]: Accepted publickey for core from 139.178.89.65 port 49520 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:33.393132 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:33.403601 systemd-logind[1946]: New session 10 of user core. Aug 5 22:14:33.410906 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.348 [WARNING][5416] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8bb515aa-fe24-4eff-89f5-c6780d1b60c8", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"83dd6e58442453a08cd9b121e0a331160fd45fb4c6c8af3189b8a21962386dcb", Pod:"coredns-7db6d8ff4d-wm525", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali879ff8e90c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.348 [INFO][5416] k8s.go 608: Cleaning up 
netns ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.348 [INFO][5416] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" iface="eth0" netns="" Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.348 [INFO][5416] k8s.go 615: Releasing IP address(es) ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.348 [INFO][5416] utils.go 188: Calico CNI releasing IP address ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.432 [INFO][5423] ipam_plugin.go 411: Releasing address using handleID ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.433 [INFO][5423] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.434 [INFO][5423] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.456 [WARNING][5423] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.456 [INFO][5423] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" HandleID="k8s-pod-network.a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Workload="ip--172--31--17--118-k8s-coredns--7db6d8ff4d--wm525-eth0" Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.462 [INFO][5423] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:33.478744 containerd[1970]: 2024-08-05 22:14:33.470 [INFO][5416] k8s.go 621: Teardown processing complete. ContainerID="a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde" Aug 5 22:14:33.479894 containerd[1970]: time="2024-08-05T22:14:33.479850730Z" level=info msg="TearDown network for sandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\" successfully" Aug 5 22:14:33.488186 containerd[1970]: time="2024-08-05T22:14:33.487972245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:14:33.488186 containerd[1970]: time="2024-08-05T22:14:33.488064065Z" level=info msg="RemovePodSandbox \"a55f7fe5f7adc31e3b41295b2a4cd23ff9d37cfa410b8b8348724d500499dcde\" returns successfully" Aug 5 22:14:34.073565 sshd[5411]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:34.080312 systemd-logind[1946]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:14:34.081750 systemd[1]: sshd@9-172.31.17.118:22-139.178.89.65:49520.service: Deactivated successfully. 
Aug 5 22:14:34.087484 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:14:34.089928 systemd-logind[1946]: Removed session 10. Aug 5 22:14:34.150144 containerd[1970]: time="2024-08-05T22:14:34.150092615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:34.152190 containerd[1970]: time="2024-08-05T22:14:34.152113243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:14:34.155273 containerd[1970]: time="2024-08-05T22:14:34.155209372Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:34.159945 containerd[1970]: time="2024-08-05T22:14:34.159284775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:34.161389 containerd[1970]: time="2024-08-05T22:14:34.161227394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.275266248s" Aug 5 22:14:34.161389 containerd[1970]: time="2024-08-05T22:14:34.161281624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:14:34.219666 containerd[1970]: time="2024-08-05T22:14:34.216878925Z" level=info msg="CreateContainer within sandbox 
\"25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:14:34.298704 containerd[1970]: time="2024-08-05T22:14:34.298655523Z" level=info msg="CreateContainer within sandbox \"25a1ddb8d0aa75cb468483a28f297951fbb1e21e06663f3ec8617017fe873be5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7487806f7be4790e8d7bef8b4acee9b5c94d5153cb7bb18ebfb62ee71731692d\"" Aug 5 22:14:34.299516 containerd[1970]: time="2024-08-05T22:14:34.299294180Z" level=info msg="StartContainer for \"7487806f7be4790e8d7bef8b4acee9b5c94d5153cb7bb18ebfb62ee71731692d\"" Aug 5 22:14:34.448256 systemd[1]: Started cri-containerd-7487806f7be4790e8d7bef8b4acee9b5c94d5153cb7bb18ebfb62ee71731692d.scope - libcontainer container 7487806f7be4790e8d7bef8b4acee9b5c94d5153cb7bb18ebfb62ee71731692d. Aug 5 22:14:34.575959 containerd[1970]: time="2024-08-05T22:14:34.575902568Z" level=info msg="StartContainer for \"7487806f7be4790e8d7bef8b4acee9b5c94d5153cb7bb18ebfb62ee71731692d\" returns successfully" Aug 5 22:14:35.069178 kubelet[3384]: I0805 22:14:35.069109 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sp2mz" podStartSLOduration=39.004061782 podStartE2EDuration="44.068652284s" podCreationTimestamp="2024-08-05 22:13:51 +0000 UTC" firstStartedPulling="2024-08-05 22:14:24.820931316 +0000 UTC m=+54.826258687" lastFinishedPulling="2024-08-05 22:14:29.885521823 +0000 UTC m=+59.890849189" observedRunningTime="2024-08-05 22:14:31.082750501 +0000 UTC m=+61.088077890" watchObservedRunningTime="2024-08-05 22:14:35.068652284 +0000 UTC m=+65.073979668" Aug 5 22:14:35.069851 kubelet[3384]: I0805 22:14:35.069683 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b54bf7c66-tr2nl" podStartSLOduration=39.367306079 podStartE2EDuration="44.069635216s" 
podCreationTimestamp="2024-08-05 22:13:51 +0000 UTC" firstStartedPulling="2024-08-05 22:14:29.461373586 +0000 UTC m=+59.466700959" lastFinishedPulling="2024-08-05 22:14:34.163702721 +0000 UTC m=+64.169030096" observedRunningTime="2024-08-05 22:14:35.066496958 +0000 UTC m=+65.071824345" watchObservedRunningTime="2024-08-05 22:14:35.069635216 +0000 UTC m=+65.074962603" Aug 5 22:14:35.204601 systemd[1]: run-containerd-runc-k8s.io-7487806f7be4790e8d7bef8b4acee9b5c94d5153cb7bb18ebfb62ee71731692d-runc.CDEUPJ.mount: Deactivated successfully. Aug 5 22:14:39.111476 systemd[1]: Started sshd@10-172.31.17.118:22-139.178.89.65:49530.service - OpenSSH per-connection server daemon (139.178.89.65:49530). Aug 5 22:14:39.333781 sshd[5533]: Accepted publickey for core from 139.178.89.65 port 49530 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:39.337006 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:39.352006 systemd-logind[1946]: New session 11 of user core. Aug 5 22:14:39.357155 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:14:39.991680 sshd[5533]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:39.996488 systemd[1]: sshd@10-172.31.17.118:22-139.178.89.65:49530.service: Deactivated successfully. Aug 5 22:14:40.000350 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:14:40.001508 systemd-logind[1946]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:14:40.002990 systemd-logind[1946]: Removed session 11. Aug 5 22:14:40.027039 systemd[1]: Started sshd@11-172.31.17.118:22-139.178.89.65:49536.service - OpenSSH per-connection server daemon (139.178.89.65:49536). 
Aug 5 22:14:40.209323 sshd[5550]: Accepted publickey for core from 139.178.89.65 port 49536 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:40.214561 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:40.237344 systemd-logind[1946]: New session 12 of user core. Aug 5 22:14:40.243895 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:14:40.656269 sshd[5550]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:40.664923 systemd-logind[1946]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:14:40.671889 systemd[1]: sshd@11-172.31.17.118:22-139.178.89.65:49536.service: Deactivated successfully. Aug 5 22:14:40.679041 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:14:40.703959 systemd-logind[1946]: Removed session 12. Aug 5 22:14:40.711603 systemd[1]: Started sshd@12-172.31.17.118:22-139.178.89.65:60778.service - OpenSSH per-connection server daemon (139.178.89.65:60778). Aug 5 22:14:40.907378 sshd[5561]: Accepted publickey for core from 139.178.89.65 port 60778 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:40.911052 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:40.934190 systemd-logind[1946]: New session 13 of user core. Aug 5 22:14:40.944763 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:14:41.555513 sshd[5561]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:41.560434 systemd[1]: sshd@12-172.31.17.118:22-139.178.89.65:60778.service: Deactivated successfully. Aug 5 22:14:41.563867 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:14:41.566201 systemd-logind[1946]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:14:41.575025 systemd-logind[1946]: Removed session 13. 
Aug 5 22:14:46.590240 systemd[1]: Started sshd@13-172.31.17.118:22-139.178.89.65:60784.service - OpenSSH per-connection server daemon (139.178.89.65:60784). Aug 5 22:14:46.829950 sshd[5611]: Accepted publickey for core from 139.178.89.65 port 60784 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:46.832346 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:46.838634 systemd-logind[1946]: New session 14 of user core. Aug 5 22:14:46.845902 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:14:47.222190 sshd[5611]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:47.228464 systemd[1]: sshd@13-172.31.17.118:22-139.178.89.65:60784.service: Deactivated successfully. Aug 5 22:14:47.231618 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:14:47.234420 systemd-logind[1946]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:14:47.243057 systemd-logind[1946]: Removed session 14. Aug 5 22:14:49.202754 systemd[1]: run-containerd-runc-k8s.io-7487806f7be4790e8d7bef8b4acee9b5c94d5153cb7bb18ebfb62ee71731692d-runc.1YvuCc.mount: Deactivated successfully. Aug 5 22:14:52.292915 systemd[1]: Started sshd@14-172.31.17.118:22-139.178.89.65:42688.service - OpenSSH per-connection server daemon (139.178.89.65:42688). Aug 5 22:14:52.509997 sshd[5645]: Accepted publickey for core from 139.178.89.65 port 42688 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:52.514245 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:52.524073 systemd-logind[1946]: New session 15 of user core. Aug 5 22:14:52.531865 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:14:52.802515 sshd[5645]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:52.807952 systemd-logind[1946]: Session 15 logged out. Waiting for processes to exit. 
Aug 5 22:14:52.809136 systemd[1]: sshd@14-172.31.17.118:22-139.178.89.65:42688.service: Deactivated successfully. Aug 5 22:14:52.811667 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:14:52.813075 systemd-logind[1946]: Removed session 15. Aug 5 22:14:55.322383 kubelet[3384]: I0805 22:14:55.322257 3384 topology_manager.go:215] "Topology Admit Handler" podUID="2941e32b-7b53-4178-9d75-6cdbbc4257b3" podNamespace="calico-apiserver" podName="calico-apiserver-6f55ffc55c-m4qkr" Aug 5 22:14:55.354732 systemd[1]: Created slice kubepods-besteffort-pod2941e32b_7b53_4178_9d75_6cdbbc4257b3.slice - libcontainer container kubepods-besteffort-pod2941e32b_7b53_4178_9d75_6cdbbc4257b3.slice. Aug 5 22:14:55.446438 kubelet[3384]: I0805 22:14:55.446392 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2941e32b-7b53-4178-9d75-6cdbbc4257b3-calico-apiserver-certs\") pod \"calico-apiserver-6f55ffc55c-m4qkr\" (UID: \"2941e32b-7b53-4178-9d75-6cdbbc4257b3\") " pod="calico-apiserver/calico-apiserver-6f55ffc55c-m4qkr" Aug 5 22:14:55.446613 kubelet[3384]: I0805 22:14:55.446449 3384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n85wr\" (UniqueName: \"kubernetes.io/projected/2941e32b-7b53-4178-9d75-6cdbbc4257b3-kube-api-access-n85wr\") pod \"calico-apiserver-6f55ffc55c-m4qkr\" (UID: \"2941e32b-7b53-4178-9d75-6cdbbc4257b3\") " pod="calico-apiserver/calico-apiserver-6f55ffc55c-m4qkr" Aug 5 22:14:55.575758 kubelet[3384]: E0805 22:14:55.553433 3384 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:14:55.610684 kubelet[3384]: E0805 22:14:55.610605 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2941e32b-7b53-4178-9d75-6cdbbc4257b3-calico-apiserver-certs podName:2941e32b-7b53-4178-9d75-6cdbbc4257b3 
nodeName:}" failed. No retries permitted until 2024-08-05 22:14:56.093518476 +0000 UTC m=+86.098845853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2941e32b-7b53-4178-9d75-6cdbbc4257b3-calico-apiserver-certs") pod "calico-apiserver-6f55ffc55c-m4qkr" (UID: "2941e32b-7b53-4178-9d75-6cdbbc4257b3") : secret "calico-apiserver-certs" not found Aug 5 22:14:56.263893 containerd[1970]: time="2024-08-05T22:14:56.263854820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f55ffc55c-m4qkr,Uid:2941e32b-7b53-4178-9d75-6cdbbc4257b3,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:14:56.655945 systemd-networkd[1807]: cali84313b51d42: Link UP Aug 5 22:14:56.666448 (udev-worker)[5686]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:14:56.684053 systemd-networkd[1807]: cali84313b51d42: Gained carrier Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.353 [INFO][5664] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0 calico-apiserver-6f55ffc55c- calico-apiserver 2941e32b-7b53-4178-9d75-6cdbbc4257b3 1016 0 2024-08-05 22:14:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f55ffc55c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-118 calico-apiserver-6f55ffc55c-m4qkr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali84313b51d42 [] []}} ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-" Aug 5 22:14:56.718714 containerd[1970]: 
2024-08-05 22:14:56.353 [INFO][5664] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.406 [INFO][5675] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" HandleID="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Workload="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.417 [INFO][5675] ipam_plugin.go 264: Auto assigning IP ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" HandleID="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Workload="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002900b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-118", "pod":"calico-apiserver-6f55ffc55c-m4qkr", "timestamp":"2024-08-05 22:14:56.406590958 +0000 UTC"}, Hostname:"ip-172-31-17-118", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.417 [INFO][5675] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.417 [INFO][5675] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.417 [INFO][5675] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-118' Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.422 [INFO][5675] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.483 [INFO][5675] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.516 [INFO][5675] ipam.go 489: Trying affinity for 192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.528 [INFO][5675] ipam.go 155: Attempting to load block cidr=192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.539 [INFO][5675] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.192/26 host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.539 [INFO][5675] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.192/26 handle="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.556 [INFO][5675] ipam.go 1685: Creating new handle: k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457 Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.588 [INFO][5675] ipam.go 1203: Writing block in order to claim IPs block=192.168.67.192/26 handle="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.640 [INFO][5675] ipam.go 1216: Successfully claimed IPs: [192.168.67.197/26] block=192.168.67.192/26 
handle="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.640 [INFO][5675] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.197/26] handle="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" host="ip-172-31-17-118" Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.640 [INFO][5675] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:56.718714 containerd[1970]: 2024-08-05 22:14:56.640 [INFO][5675] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.67.197/26] IPv6=[] ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" HandleID="k8s-pod-network.d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Workload="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" Aug 5 22:14:56.725051 containerd[1970]: 2024-08-05 22:14:56.647 [INFO][5664] k8s.go 386: Populated endpoint ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0", GenerateName:"calico-apiserver-6f55ffc55c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2941e32b-7b53-4178-9d75-6cdbbc4257b3", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f55ffc55c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"", Pod:"calico-apiserver-6f55ffc55c-m4qkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84313b51d42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:56.725051 containerd[1970]: 2024-08-05 22:14:56.647 [INFO][5664] k8s.go 387: Calico CNI using IPs: [192.168.67.197/32] ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" Aug 5 22:14:56.725051 containerd[1970]: 2024-08-05 22:14:56.647 [INFO][5664] dataplane_linux.go 68: Setting the host side veth name to cali84313b51d42 ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" Aug 5 22:14:56.725051 containerd[1970]: 2024-08-05 22:14:56.660 [INFO][5664] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" Aug 5 22:14:56.725051 containerd[1970]: 2024-08-05 22:14:56.661 [INFO][5664] k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0", GenerateName:"calico-apiserver-6f55ffc55c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2941e32b-7b53-4178-9d75-6cdbbc4257b3", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f55ffc55c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-118", ContainerID:"d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457", Pod:"calico-apiserver-6f55ffc55c-m4qkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84313b51d42", MAC:"36:58:4a:f7:18:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:56.725051 containerd[1970]: 2024-08-05 22:14:56.709 [INFO][5664] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457" Namespace="calico-apiserver" Pod="calico-apiserver-6f55ffc55c-m4qkr" WorkloadEndpoint="ip--172--31--17--118-k8s-calico--apiserver--6f55ffc55c--m4qkr-eth0" Aug 5 22:14:56.842839 containerd[1970]: time="2024-08-05T22:14:56.841849717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:56.842839 containerd[1970]: time="2024-08-05T22:14:56.841928701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:56.842839 containerd[1970]: time="2024-08-05T22:14:56.841957757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:56.842839 containerd[1970]: time="2024-08-05T22:14:56.842471965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:56.919186 systemd[1]: Started cri-containerd-d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457.scope - libcontainer container d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457. Aug 5 22:14:57.060923 containerd[1970]: time="2024-08-05T22:14:57.060857203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f55ffc55c-m4qkr,Uid:2941e32b-7b53-4178-9d75-6cdbbc4257b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457\"" Aug 5 22:14:57.064702 containerd[1970]: time="2024-08-05T22:14:57.064245993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:14:57.848860 systemd[1]: Started sshd@15-172.31.17.118:22-139.178.89.65:42702.service - OpenSSH per-connection server daemon (139.178.89.65:42702). 
Aug 5 22:14:57.992482 systemd-networkd[1807]: cali84313b51d42: Gained IPv6LL Aug 5 22:14:58.199792 sshd[5743]: Accepted publickey for core from 139.178.89.65 port 42702 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:58.203749 sshd[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:58.216937 systemd-logind[1946]: New session 16 of user core. Aug 5 22:14:58.225214 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:14:59.424204 sshd[5743]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:59.460029 systemd[1]: sshd@15-172.31.17.118:22-139.178.89.65:42702.service: Deactivated successfully. Aug 5 22:14:59.460420 systemd-logind[1946]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:14:59.475224 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:14:59.478167 systemd-logind[1946]: Removed session 16. Aug 5 22:15:00.530470 ntpd[1938]: Listen normally on 13 cali84313b51d42 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:15:00.531083 ntpd[1938]: 5 Aug 22:15:00 ntpd[1938]: Listen normally on 13 cali84313b51d42 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:15:04.116832 containerd[1970]: time="2024-08-05T22:15:04.116766245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Aug 5 22:15:04.128669 containerd[1970]: time="2024-08-05T22:15:04.127617253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 7.06330184s" Aug 5 22:15:04.128669 containerd[1970]: time="2024-08-05T22:15:04.128504584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference 
\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 5 22:15:04.139268 containerd[1970]: time="2024-08-05T22:15:04.139203688Z" level=info msg="CreateContainer within sandbox \"d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 22:15:04.156747 containerd[1970]: time="2024-08-05T22:15:04.156556111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:15:04.158624 containerd[1970]: time="2024-08-05T22:15:04.158407508Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:15:04.161673 containerd[1970]: time="2024-08-05T22:15:04.159406489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:15:04.187762 containerd[1970]: time="2024-08-05T22:15:04.187706367Z" level=info msg="CreateContainer within sandbox \"d9d8a817485fb4814f1922d3cc889a3f6d03d6bf0c06976a1010d5036038c457\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"54b85f6762d374eed75a312174829d77c29b2e6ff4efbe237090bccac8337e6b\"" Aug 5 22:15:04.189763 containerd[1970]: time="2024-08-05T22:15:04.188754797Z" level=info msg="StartContainer for \"54b85f6762d374eed75a312174829d77c29b2e6ff4efbe237090bccac8337e6b\"" Aug 5 22:15:04.331671 systemd[1]: Started cri-containerd-54b85f6762d374eed75a312174829d77c29b2e6ff4efbe237090bccac8337e6b.scope - libcontainer container 54b85f6762d374eed75a312174829d77c29b2e6ff4efbe237090bccac8337e6b. 
Aug 5 22:15:04.465931 containerd[1970]: time="2024-08-05T22:15:04.464242767Z" level=info msg="StartContainer for \"54b85f6762d374eed75a312174829d77c29b2e6ff4efbe237090bccac8337e6b\" returns successfully"
Aug 5 22:15:04.468495 systemd[1]: Started sshd@16-172.31.17.118:22-139.178.89.65:37124.service - OpenSSH per-connection server daemon (139.178.89.65:37124).
Aug 5 22:15:04.756721 sshd[5796]: Accepted publickey for core from 139.178.89.65 port 37124 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:04.763585 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:04.784122 systemd-logind[1946]: New session 17 of user core.
Aug 5 22:15:04.791062 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 5 22:15:05.322581 kubelet[3384]: I0805 22:15:05.322505 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f55ffc55c-m4qkr" podStartSLOduration=3.252666444 podStartE2EDuration="10.322475537s" podCreationTimestamp="2024-08-05 22:14:55 +0000 UTC" firstStartedPulling="2024-08-05 22:14:57.062868855 +0000 UTC m=+87.068196221" lastFinishedPulling="2024-08-05 22:15:04.132677937 +0000 UTC m=+94.138005314" observedRunningTime="2024-08-05 22:15:05.315988163 +0000 UTC m=+95.321315560" watchObservedRunningTime="2024-08-05 22:15:05.322475537 +0000 UTC m=+95.327802923"
Aug 5 22:15:05.613919 sshd[5796]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:05.627862 systemd[1]: sshd@16-172.31.17.118:22-139.178.89.65:37124.service: Deactivated successfully.
Aug 5 22:15:05.637012 systemd[1]: session-17.scope: Deactivated successfully.
Aug 5 22:15:05.646027 systemd-logind[1946]: Session 17 logged out. Waiting for processes to exit.
Aug 5 22:15:05.695162 systemd[1]: Started sshd@17-172.31.17.118:22-139.178.89.65:37126.service - OpenSSH per-connection server daemon (139.178.89.65:37126).
Aug 5 22:15:05.699600 systemd-logind[1946]: Removed session 17.
Aug 5 22:15:05.866707 systemd[1]: run-containerd-runc-k8s.io-5fbd2029b85f86af81ed82666947e072dfb5c52e61a403e471d56dfa7587c05c-runc.ykTfiH.mount: Deactivated successfully.
Aug 5 22:15:05.996361 sshd[5818]: Accepted publickey for core from 139.178.89.65 port 37126 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:05.999493 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:06.011193 systemd-logind[1946]: New session 18 of user core.
Aug 5 22:15:06.019897 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 5 22:15:07.001569 sshd[5818]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:07.015154 systemd-logind[1946]: Session 18 logged out. Waiting for processes to exit.
Aug 5 22:15:07.016966 systemd[1]: sshd@17-172.31.17.118:22-139.178.89.65:37126.service: Deactivated successfully.
Aug 5 22:15:07.019525 systemd[1]: session-18.scope: Deactivated successfully.
Aug 5 22:15:07.041546 systemd-logind[1946]: Removed session 18.
Aug 5 22:15:07.049418 systemd[1]: Started sshd@18-172.31.17.118:22-139.178.89.65:37140.service - OpenSSH per-connection server daemon (139.178.89.65:37140).
Aug 5 22:15:07.260268 sshd[5865]: Accepted publickey for core from 139.178.89.65 port 37140 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:07.268004 sshd[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:07.277167 systemd-logind[1946]: New session 19 of user core.
Aug 5 22:15:07.285878 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 5 22:15:11.240538 sshd[5865]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:11.247634 systemd[1]: sshd@18-172.31.17.118:22-139.178.89.65:37140.service: Deactivated successfully.
Aug 5 22:15:11.253473 systemd[1]: session-19.scope: Deactivated successfully.
Aug 5 22:15:11.268714 systemd-logind[1946]: Session 19 logged out. Waiting for processes to exit.
Aug 5 22:15:11.288486 systemd[1]: Started sshd@19-172.31.17.118:22-139.178.89.65:46708.service - OpenSSH per-connection server daemon (139.178.89.65:46708).
Aug 5 22:15:11.291542 systemd-logind[1946]: Removed session 19.
Aug 5 22:15:11.501182 sshd[5903]: Accepted publickey for core from 139.178.89.65 port 46708 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:11.503060 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:11.516084 systemd-logind[1946]: New session 20 of user core.
Aug 5 22:15:11.536767 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:15:13.085065 sshd[5903]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:13.096800 systemd[1]: sshd@19-172.31.17.118:22-139.178.89.65:46708.service: Deactivated successfully.
Aug 5 22:15:13.099275 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:15:13.101134 systemd-logind[1946]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:15:13.103321 systemd-logind[1946]: Removed session 20.
Aug 5 22:15:13.124156 systemd[1]: Started sshd@20-172.31.17.118:22-139.178.89.65:46712.service - OpenSSH per-connection server daemon (139.178.89.65:46712).
Aug 5 22:15:13.384157 sshd[5921]: Accepted publickey for core from 139.178.89.65 port 46712 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:13.387818 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:13.402093 systemd-logind[1946]: New session 21 of user core.
Aug 5 22:15:13.410875 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 5 22:15:13.690803 sshd[5921]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:13.699176 systemd-logind[1946]: Session 21 logged out. Waiting for processes to exit.
Aug 5 22:15:13.700457 systemd[1]: sshd@20-172.31.17.118:22-139.178.89.65:46712.service: Deactivated successfully.
Aug 5 22:15:13.704055 systemd[1]: session-21.scope: Deactivated successfully.
Aug 5 22:15:13.705340 systemd-logind[1946]: Removed session 21.
Aug 5 22:15:18.732482 systemd[1]: Started sshd@21-172.31.17.118:22-139.178.89.65:46718.service - OpenSSH per-connection server daemon (139.178.89.65:46718).
Aug 5 22:15:18.963608 sshd[5944]: Accepted publickey for core from 139.178.89.65 port 46718 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:18.964411 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:18.983287 systemd-logind[1946]: New session 22 of user core.
Aug 5 22:15:18.992049 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 5 22:15:19.255172 sshd[5944]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:19.260408 systemd-logind[1946]: Session 22 logged out. Waiting for processes to exit.
Aug 5 22:15:19.262218 systemd[1]: sshd@21-172.31.17.118:22-139.178.89.65:46718.service: Deactivated successfully.
Aug 5 22:15:19.264880 systemd[1]: session-22.scope: Deactivated successfully.
Aug 5 22:15:19.266313 systemd-logind[1946]: Removed session 22.
Aug 5 22:15:24.297353 systemd[1]: Started sshd@22-172.31.17.118:22-139.178.89.65:38708.service - OpenSSH per-connection server daemon (139.178.89.65:38708).
Aug 5 22:15:24.504689 sshd[5960]: Accepted publickey for core from 139.178.89.65 port 38708 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:24.509014 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:24.522779 systemd-logind[1946]: New session 23 of user core.
Aug 5 22:15:24.536115 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 5 22:15:24.942576 sshd[5960]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:24.948442 systemd-logind[1946]: Session 23 logged out. Waiting for processes to exit.
Aug 5 22:15:24.949505 systemd[1]: sshd@22-172.31.17.118:22-139.178.89.65:38708.service: Deactivated successfully.
Aug 5 22:15:24.953401 systemd[1]: session-23.scope: Deactivated successfully.
Aug 5 22:15:24.955000 systemd-logind[1946]: Removed session 23.
Aug 5 22:15:29.980422 systemd[1]: Started sshd@23-172.31.17.118:22-139.178.89.65:38716.service - OpenSSH per-connection server daemon (139.178.89.65:38716).
Aug 5 22:15:30.180925 sshd[5978]: Accepted publickey for core from 139.178.89.65 port 38716 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:30.183535 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:30.198139 systemd-logind[1946]: New session 24 of user core.
Aug 5 22:15:30.206935 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:15:30.725916 sshd[5978]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:30.737392 systemd[1]: sshd@23-172.31.17.118:22-139.178.89.65:38716.service: Deactivated successfully.
Aug 5 22:15:30.745335 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:15:30.757383 systemd-logind[1946]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:15:30.762532 systemd-logind[1946]: Removed session 24.
Aug 5 22:15:35.765852 systemd[1]: Started sshd@24-172.31.17.118:22-139.178.89.65:34170.service - OpenSSH per-connection server daemon (139.178.89.65:34170).
Aug 5 22:15:35.966729 sshd[6013]: Accepted publickey for core from 139.178.89.65 port 34170 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:35.972274 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:35.992227 systemd-logind[1946]: New session 25 of user core.
Aug 5 22:15:36.001895 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 5 22:15:36.375718 sshd[6013]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:36.415966 systemd-logind[1946]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:15:36.416860 systemd[1]: sshd@24-172.31.17.118:22-139.178.89.65:34170.service: Deactivated successfully.
Aug 5 22:15:36.430973 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:15:36.435160 systemd-logind[1946]: Removed session 25.
Aug 5 22:15:41.431438 systemd[1]: Started sshd@25-172.31.17.118:22-139.178.89.65:51364.service - OpenSSH per-connection server daemon (139.178.89.65:51364).
Aug 5 22:15:41.625857 sshd[6053]: Accepted publickey for core from 139.178.89.65 port 51364 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:41.628285 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:41.639206 systemd-logind[1946]: New session 26 of user core.
Aug 5 22:15:41.650899 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 5 22:15:42.066066 sshd[6053]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:42.094700 systemd[1]: sshd@25-172.31.17.118:22-139.178.89.65:51364.service: Deactivated successfully.
Aug 5 22:15:42.109142 systemd[1]: session-26.scope: Deactivated successfully.
Aug 5 22:15:42.120564 systemd-logind[1946]: Session 26 logged out. Waiting for processes to exit.
Aug 5 22:15:42.122307 systemd-logind[1946]: Removed session 26.
Aug 5 22:15:47.099241 systemd[1]: Started sshd@26-172.31.17.118:22-139.178.89.65:51370.service - OpenSSH per-connection server daemon (139.178.89.65:51370).
Aug 5 22:15:47.328004 sshd[6073]: Accepted publickey for core from 139.178.89.65 port 51370 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:47.328876 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:47.336839 systemd-logind[1946]: New session 27 of user core.
Aug 5 22:15:47.343995 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 5 22:15:47.722731 sshd[6073]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:47.734793 systemd[1]: sshd@26-172.31.17.118:22-139.178.89.65:51370.service: Deactivated successfully.
Aug 5 22:15:47.742525 systemd[1]: session-27.scope: Deactivated successfully.
Aug 5 22:15:47.748173 systemd-logind[1946]: Session 27 logged out. Waiting for processes to exit.
Aug 5 22:15:47.757042 systemd-logind[1946]: Removed session 27.
Aug 5 22:15:52.764067 systemd[1]: Started sshd@27-172.31.17.118:22-139.178.89.65:54032.service - OpenSSH per-connection server daemon (139.178.89.65:54032).
Aug 5 22:15:53.074829 sshd[6111]: Accepted publickey for core from 139.178.89.65 port 54032 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:53.077600 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:53.098796 systemd-logind[1946]: New session 28 of user core.
Aug 5 22:15:53.106029 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 5 22:15:54.032720 sshd[6111]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:54.036631 systemd[1]: sshd@27-172.31.17.118:22-139.178.89.65:54032.service: Deactivated successfully.
Aug 5 22:15:54.041864 systemd[1]: session-28.scope: Deactivated successfully.
Aug 5 22:15:54.045142 systemd-logind[1946]: Session 28 logged out. Waiting for processes to exit.
Aug 5 22:15:54.046628 systemd-logind[1946]: Removed session 28.
Aug 5 22:16:07.788279 systemd[1]: cri-containerd-3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b.scope: Deactivated successfully.
Aug 5 22:16:07.788968 systemd[1]: cri-containerd-3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b.scope: Consumed 3.740s CPU time, 23.0M memory peak, 0B memory swap peak.
Aug 5 22:16:08.024560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b-rootfs.mount: Deactivated successfully.
Aug 5 22:16:08.091991 containerd[1970]: time="2024-08-05T22:16:08.022340034Z" level=info msg="shim disconnected" id=3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b namespace=k8s.io
Aug 5 22:16:08.091991 containerd[1970]: time="2024-08-05T22:16:08.091899386Z" level=warning msg="cleaning up after shim disconnected" id=3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b namespace=k8s.io
Aug 5 22:16:08.091991 containerd[1970]: time="2024-08-05T22:16:08.091919754Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:16:08.713782 kubelet[3384]: I0805 22:16:08.712969 3384 scope.go:117] "RemoveContainer" containerID="3397d39af71b8a7760ab7ecb9011fceb810e37e40e8b582496f85b750084a08b"
Aug 5 22:16:08.922912 containerd[1970]: time="2024-08-05T22:16:08.922858435Z" level=info msg="CreateContainer within sandbox \"660e44892a242a1f6955da6746d7a3de1d680203504b9d80393397d16b4c5078\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 5 22:16:08.995412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018333184.mount: Deactivated successfully.
Aug 5 22:16:09.031503 containerd[1970]: time="2024-08-05T22:16:09.029916701Z" level=info msg="CreateContainer within sandbox \"660e44892a242a1f6955da6746d7a3de1d680203504b9d80393397d16b4c5078\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"265cb5ef101e18aeeef3a5ed87f6641b3c30af9834d03338827dacb9ade5ee0a\""
Aug 5 22:16:09.037752 containerd[1970]: time="2024-08-05T22:16:09.037573734Z" level=info msg="StartContainer for \"265cb5ef101e18aeeef3a5ed87f6641b3c30af9834d03338827dacb9ade5ee0a\""
Aug 5 22:16:09.046984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4181799908.mount: Deactivated successfully.
Aug 5 22:16:09.215171 systemd[1]: Started cri-containerd-265cb5ef101e18aeeef3a5ed87f6641b3c30af9834d03338827dacb9ade5ee0a.scope - libcontainer container 265cb5ef101e18aeeef3a5ed87f6641b3c30af9834d03338827dacb9ade5ee0a.
Aug 5 22:16:09.333494 systemd[1]: cri-containerd-e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db.scope: Deactivated successfully.
Aug 5 22:16:09.333820 systemd[1]: cri-containerd-e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db.scope: Consumed 6.343s CPU time.
Aug 5 22:16:09.336274 containerd[1970]: time="2024-08-05T22:16:09.336234045Z" level=info msg="StartContainer for \"265cb5ef101e18aeeef3a5ed87f6641b3c30af9834d03338827dacb9ade5ee0a\" returns successfully"
Aug 5 22:16:09.368542 containerd[1970]: time="2024-08-05T22:16:09.368395848Z" level=info msg="shim disconnected" id=e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db namespace=k8s.io
Aug 5 22:16:09.368542 containerd[1970]: time="2024-08-05T22:16:09.368480207Z" level=warning msg="cleaning up after shim disconnected" id=e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db namespace=k8s.io
Aug 5 22:16:09.368542 containerd[1970]: time="2024-08-05T22:16:09.368496215Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:16:09.372180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db-rootfs.mount: Deactivated successfully.
Aug 5 22:16:09.647381 kubelet[3384]: I0805 22:16:09.646890 3384 scope.go:117] "RemoveContainer" containerID="e53889fc2540502c8ebeaf8b0bf0baa92fdfd825bca02d11eec053ec12bd22db"
Aug 5 22:16:09.676670 containerd[1970]: time="2024-08-05T22:16:09.676589270Z" level=info msg="CreateContainer within sandbox \"673095c1ba3ca85c8c47f60223b7547001cf3ea8459fb688cb3755ff619f36f9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 5 22:16:09.749927 containerd[1970]: time="2024-08-05T22:16:09.749822552Z" level=info msg="CreateContainer within sandbox \"673095c1ba3ca85c8c47f60223b7547001cf3ea8459fb688cb3755ff619f36f9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e8f784a54d5809320ef4fe770c27552d52fdcad35c87eddc58904e823ba42296\""
Aug 5 22:16:09.758924 containerd[1970]: time="2024-08-05T22:16:09.756784152Z" level=info msg="StartContainer for \"e8f784a54d5809320ef4fe770c27552d52fdcad35c87eddc58904e823ba42296\""
Aug 5 22:16:09.859911 systemd[1]: Started cri-containerd-e8f784a54d5809320ef4fe770c27552d52fdcad35c87eddc58904e823ba42296.scope - libcontainer container e8f784a54d5809320ef4fe770c27552d52fdcad35c87eddc58904e823ba42296.
Aug 5 22:16:09.993020 containerd[1970]: time="2024-08-05T22:16:09.992829258Z" level=info msg="StartContainer for \"e8f784a54d5809320ef4fe770c27552d52fdcad35c87eddc58904e823ba42296\" returns successfully"
Aug 5 22:16:10.121613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374375825.mount: Deactivated successfully.
Aug 5 22:16:13.903476 systemd[1]: cri-containerd-dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5.scope: Deactivated successfully.
Aug 5 22:16:13.906316 systemd[1]: cri-containerd-dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5.scope: Consumed 1.950s CPU time, 18.6M memory peak, 0B memory swap peak.
Aug 5 22:16:13.929279 kubelet[3384]: E0805 22:16:13.928910 3384 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-118?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 5 22:16:13.969667 containerd[1970]: time="2024-08-05T22:16:13.969472160Z" level=info msg="shim disconnected" id=dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5 namespace=k8s.io
Aug 5 22:16:13.969667 containerd[1970]: time="2024-08-05T22:16:13.969569699Z" level=warning msg="cleaning up after shim disconnected" id=dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5 namespace=k8s.io
Aug 5 22:16:13.970949 containerd[1970]: time="2024-08-05T22:16:13.969843371Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:16:13.976066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5-rootfs.mount: Deactivated successfully.
Aug 5 22:16:14.678822 kubelet[3384]: I0805 22:16:14.678780 3384 scope.go:117] "RemoveContainer" containerID="dc9742a61845139976b8a9418a1f8999e08411ca78c3f7608024a5db5fc33ac5"
Aug 5 22:16:14.687413 containerd[1970]: time="2024-08-05T22:16:14.686886410Z" level=info msg="CreateContainer within sandbox \"81f85a00770beaf9a105a041e79ce7f1b35b4c820cca71899e245e78b026bc95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 5 22:16:14.724691 containerd[1970]: time="2024-08-05T22:16:14.724490042Z" level=info msg="CreateContainer within sandbox \"81f85a00770beaf9a105a041e79ce7f1b35b4c820cca71899e245e78b026bc95\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d694640cdf7e454dabbc2b2fc2f518779a9c7892bfe37eed7bdc2d23b4b75b0d\""
Aug 5 22:16:14.726581 containerd[1970]: time="2024-08-05T22:16:14.726530250Z" level=info msg="StartContainer for \"d694640cdf7e454dabbc2b2fc2f518779a9c7892bfe37eed7bdc2d23b4b75b0d\""
Aug 5 22:16:14.794869 systemd[1]: Started cri-containerd-d694640cdf7e454dabbc2b2fc2f518779a9c7892bfe37eed7bdc2d23b4b75b0d.scope - libcontainer container d694640cdf7e454dabbc2b2fc2f518779a9c7892bfe37eed7bdc2d23b4b75b0d.
Aug 5 22:16:14.860783 containerd[1970]: time="2024-08-05T22:16:14.860734404Z" level=info msg="StartContainer for \"d694640cdf7e454dabbc2b2fc2f518779a9c7892bfe37eed7bdc2d23b4b75b0d\" returns successfully"
Aug 5 22:16:23.929822 kubelet[3384]: E0805 22:16:23.929702 3384 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-118?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"