Oct 9 01:04:56.100850 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024 Oct 9 01:04:56.100893 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:04:56.100908 kernel: BIOS-provided physical RAM map: Oct 9 01:04:56.100919 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 9 01:04:56.100929 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 9 01:04:56.100940 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 9 01:04:56.100955 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Oct 9 01:04:56.100967 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Oct 9 01:04:56.100979 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Oct 9 01:04:56.100991 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 9 01:04:56.101003 kernel: NX (Execute Disable) protection: active Oct 9 01:04:56.101017 kernel: APIC: Static calls initialized Oct 9 01:04:56.101060 kernel: SMBIOS 2.7 present. Oct 9 01:04:56.101072 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Oct 9 01:04:56.101160 kernel: Hypervisor detected: KVM Oct 9 01:04:56.101174 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 01:04:56.101186 kernel: kvm-clock: using sched offset of 6453966467 cycles Oct 9 01:04:56.101200 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 01:04:56.101214 kernel: tsc: Detected 2499.996 MHz processor Oct 9 01:04:56.101229 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 01:04:56.101243 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 01:04:56.101261 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Oct 9 01:04:56.101273 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 9 01:04:56.101286 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 01:04:56.101298 kernel: Using GB pages for direct mapping Oct 9 01:04:56.101309 kernel: ACPI: Early table checksum verification disabled Oct 9 01:04:56.101321 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Oct 9 01:04:56.101333 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Oct 9 01:04:56.101346 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 9 01:04:56.101360 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Oct 9 01:04:56.101377 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Oct 9 01:04:56.101390 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 9 01:04:56.101403 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 9 01:04:56.101417 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Oct 9 01:04:56.101429 kernel: ACPI: SLIT 
0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 9 01:04:56.101440 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Oct 9 01:04:56.101453 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Oct 9 01:04:56.101464 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 9 01:04:56.101477 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Oct 9 01:04:56.101492 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Oct 9 01:04:56.101511 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Oct 9 01:04:56.101525 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Oct 9 01:04:56.101578 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Oct 9 01:04:56.101593 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Oct 9 01:04:56.101612 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Oct 9 01:04:56.101625 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Oct 9 01:04:56.101643 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Oct 9 01:04:56.101656 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Oct 9 01:04:56.101670 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 9 01:04:56.101683 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 9 01:04:56.101696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Oct 9 01:04:56.101709 kernel: NUMA: Initialized distance table, cnt=1 Oct 9 01:04:56.101723 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Oct 9 01:04:56.101741 kernel: Zone ranges: Oct 9 01:04:56.101761 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 01:04:56.101774 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Oct 9 01:04:56.101816 kernel: Normal empty Oct 9 01:04:56.101828 kernel: Movable zone start for each node Oct 9 01:04:56.101840 kernel: Early memory node ranges Oct 9 01:04:56.101852 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 9 01:04:56.101866 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Oct 9 01:04:56.101879 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Oct 9 01:04:56.101897 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 01:04:56.101911 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 9 01:04:56.101926 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Oct 9 01:04:56.101941 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 9 01:04:56.101956 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 01:04:56.101970 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Oct 9 01:04:56.101985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 01:04:56.102001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 01:04:56.102017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 01:04:56.102035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 01:04:56.102051 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 01:04:56.102066 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 01:04:56.102080 kernel: TSC deadline timer available Oct 9 01:04:56.102094 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 9 01:04:56.102107 kernel: kvm-guest: APIC: eoi() replaced with 
kvm_guest_apic_eoi_write() Oct 9 01:04:56.102121 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Oct 9 01:04:56.102134 kernel: Booting paravirtualized kernel on KVM Oct 9 01:04:56.102150 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 01:04:56.102166 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 9 01:04:56.102184 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 9 01:04:56.102199 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 9 01:04:56.102214 kernel: pcpu-alloc: [0] 0 1 Oct 9 01:04:56.102228 kernel: kvm-guest: PV spinlocks enabled Oct 9 01:04:56.102241 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 01:04:56.102256 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:04:56.102271 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 01:04:56.102288 kernel: random: crng init done Oct 9 01:04:56.102300 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 01:04:56.102315 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 9 01:04:56.102329 kernel: Fallback order for Node 0: 0 Oct 9 01:04:56.102342 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Oct 9 01:04:56.102356 kernel: Policy zone: DMA32 Oct 9 01:04:56.102370 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 01:04:56.102385 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 125152K reserved, 0K cma-reserved) Oct 9 01:04:56.102399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 9 01:04:56.102416 kernel: Kernel/User page tables isolation: enabled Oct 9 01:04:56.102431 kernel: ftrace: allocating 37786 entries in 148 pages Oct 9 01:04:56.102445 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 01:04:56.102459 kernel: Dynamic Preempt: voluntary Oct 9 01:04:56.102474 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 01:04:56.102491 kernel: rcu: RCU event tracing is enabled. Oct 9 01:04:56.102507 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 9 01:04:56.102523 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 01:04:56.102539 kernel: Rude variant of Tasks RCU enabled. Oct 9 01:04:56.102555 kernel: Tracing variant of Tasks RCU enabled. Oct 9 01:04:56.102575 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 01:04:56.102590 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 9 01:04:56.102605 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 9 01:04:56.102621 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
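
The kernel command line above (echoed again later by dracut) is a flat string of bare flags and key=value pairs. A minimal parsing sketch, using the exact string from this boot; the helper itself is illustrative and ignores quoted values containing spaces.

# Minimal sketch: split the kernel command line logged above into
# bare flags and key=value pairs (the last occurrence of a key wins).
CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront "
    "net.ifnames=0 nvme_core.io_timeout=4294967295 "
    "verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a"
)

def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # bare flags become True
    return params

params = parse_cmdline(CMDLINE)
print(params["root"])            # LABEL=ROOT
print(params["verity.usrhash"])  # dm-verity root hash for the /usr partition
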
Oct 9 01:04:56.102636 kernel: Console: colour VGA+ 80x25 Oct 9 01:04:56.102653 kernel: printk: console [ttyS0] enabled Oct 9 01:04:56.102668 kernel: ACPI: Core revision 20230628 Oct 9 01:04:56.102684 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Oct 9 01:04:56.102699 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 01:04:56.102718 kernel: x2apic enabled Oct 9 01:04:56.102735 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 01:04:56.102764 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Oct 9 01:04:56.102803 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Oct 9 01:04:56.103247 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 9 01:04:56.103272 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 9 01:04:56.103318 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 01:04:56.103335 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 01:04:56.103351 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 01:04:56.103365 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 01:04:56.103379 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Oct 9 01:04:56.103395 kernel: RETBleed: Vulnerable Oct 9 01:04:56.103415 kernel: Speculative Store Bypass: Vulnerable Oct 9 01:04:56.103430 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Oct 9 01:04:56.103444 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 9 01:04:56.103459 kernel: GDS: Unknown: Dependent on hypervisor status Oct 9 01:04:56.103474 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 01:04:56.103488 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 01:04:56.103507 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 01:04:56.103522 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Oct 9 01:04:56.103537 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Oct 9 01:04:56.103552 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 9 01:04:56.103569 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 9 01:04:56.103639 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 9 01:04:56.103658 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Oct 9 01:04:56.103676 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 01:04:56.103694 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Oct 9 01:04:56.103710 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Oct 9 01:04:56.103727 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Oct 9 01:04:56.103748 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Oct 9 01:04:56.103764 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Oct 9 01:04:56.103802 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Oct 9 01:04:56.103816 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
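
The x86/fpu lines above list the offset and size of every enabled xstate component and then report "context size is 2568 bytes, using 'compacted' format". A quick arithmetic check (mine, not from any kernel tool) that the last component's offset plus its size gives exactly that figure.

# xstate_offset[i] / xstate_sizes[i] pairs exactly as printed above.
# Components 0 and 1 (x87, SSE) occupy the fixed 512-byte legacy area,
# followed by the 64-byte XSAVE header, hence the first offset of 576.
components = {
    2: (576, 256),     # AVX registers
    3: (832, 64),      # MPX bounds registers
    4: (896, 64),      # MPX CSR
    5: (960, 64),      # AVX-512 opmask
    6: (1024, 512),    # AVX-512 Hi256
    7: (1536, 1024),   # AVX-512 ZMM_Hi256
    9: (2560, 8),      # Protection Keys User registers
}

# In the compacted format the components are packed back to back, so the
# context size is simply where the last component ends.
last_offset, last_size = components[max(components)]
assert last_offset + last_size == 2568
print(last_offset + last_size)   # 2568, matching the log
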
Oct 9 01:04:56.103829 kernel: Freeing SMP alternatives memory: 32K Oct 9 01:04:56.103843 kernel: pid_max: default: 32768 minimum: 301 Oct 9 01:04:56.103856 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 01:04:56.103868 kernel: landlock: Up and running. Oct 9 01:04:56.103881 kernel: SELinux: Initializing. Oct 9 01:04:56.103893 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 01:04:56.103906 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 01:04:56.103920 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 9 01:04:56.103937 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:04:56.103950 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:04:56.103964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:04:56.106043 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 9 01:04:56.106065 kernel: signal: max sigframe size: 3632 Oct 9 01:04:56.106080 kernel: rcu: Hierarchical SRCU implementation. Oct 9 01:04:56.106095 kernel: rcu: Max phase no-delay instances is 400. Oct 9 01:04:56.106108 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 9 01:04:56.106122 kernel: smp: Bringing up secondary CPUs ... Oct 9 01:04:56.106142 kernel: smpboot: x86: Booting SMP configuration: Oct 9 01:04:56.106154 kernel: .... node #0, CPUs: #1 Oct 9 01:04:56.106168 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Oct 9 01:04:56.106183 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 9 01:04:56.106197 kernel: smp: Brought up 1 node, 2 CPUs Oct 9 01:04:56.106211 kernel: smpboot: Max logical packages: 1 Oct 9 01:04:56.106225 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Oct 9 01:04:56.106241 kernel: devtmpfs: initialized Oct 9 01:04:56.106258 kernel: x86/mm: Memory block size: 128MB Oct 9 01:04:56.106273 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 01:04:56.106286 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 9 01:04:56.106301 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 01:04:56.106316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 01:04:56.106332 kernel: audit: initializing netlink subsys (disabled) Oct 9 01:04:56.106345 kernel: audit: type=2000 audit(1728435895.066:1): state=initialized audit_enabled=0 res=1 Oct 9 01:04:56.106360 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 01:04:56.106372 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 01:04:56.106389 kernel: cpuidle: using governor menu Oct 9 01:04:56.106403 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 01:04:56.106417 kernel: dca service started, version 1.12.1 Oct 9 01:04:56.106432 kernel: PCI: Using configuration type 1 for base access Oct 9 01:04:56.106445 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
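
The calibration line earlier reports 4999.99 BogoMIPS per CPU with lpj=2499996, and the SMP bring-up above totals 9999.98 BogoMIPS for 2 CPUs. A small check of that arithmetic; it assumes the kernel's usual formula bogomips = lpj * HZ / 500000 and a tick rate of HZ=1000, neither of which is printed in this log.

# BogoMIPS as derived from loops-per-jiffy (lpj); HZ=1000 is an assumption.
LPJ = 2_499_996
HZ = 1000
NR_CPUS = 2

per_cpu = LPJ * HZ / 500_000
total = per_cpu * NR_CPUS

print(f"{per_cpu:.2f}")   # 4999.99 -> matches the calibration line
print(f"{total:.2f}")     # 9999.98 -> matches "Total of 2 processors activated"
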
Oct 9 01:04:56.106459 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 01:04:56.106473 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 01:04:56.106487 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 01:04:56.106501 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 01:04:56.106518 kernel: ACPI: Added _OSI(Module Device) Oct 9 01:04:56.106532 kernel: ACPI: Added _OSI(Processor Device) Oct 9 01:04:56.106546 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 01:04:56.106560 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 01:04:56.106574 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 9 01:04:56.106588 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 01:04:56.106602 kernel: ACPI: Interpreter enabled Oct 9 01:04:56.106617 kernel: ACPI: PM: (supports S0 S5) Oct 9 01:04:56.106630 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 01:04:56.106644 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 01:04:56.106661 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 01:04:56.106674 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 9 01:04:56.106687 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 01:04:56.106920 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 9 01:04:56.107961 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 9 01:04:56.108141 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 9 01:04:56.108164 kernel: acpiphp: Slot [3] registered Oct 9 01:04:56.108188 kernel: acpiphp: Slot [4] registered Oct 9 01:04:56.108205 kernel: acpiphp: Slot [5] registered Oct 9 01:04:56.108222 kernel: acpiphp: Slot [6] registered Oct 9 01:04:56.108240 kernel: acpiphp: Slot [7] registered Oct 9 01:04:56.108255 kernel: acpiphp: Slot [8] registered Oct 9 01:04:56.108272 kernel: acpiphp: Slot [9] registered Oct 9 01:04:56.108288 kernel: acpiphp: Slot [10] registered Oct 9 01:04:56.108304 kernel: acpiphp: Slot [11] registered Oct 9 01:04:56.108320 kernel: acpiphp: Slot [12] registered Oct 9 01:04:56.108339 kernel: acpiphp: Slot [13] registered Oct 9 01:04:56.108356 kernel: acpiphp: Slot [14] registered Oct 9 01:04:56.108371 kernel: acpiphp: Slot [15] registered Oct 9 01:04:56.108388 kernel: acpiphp: Slot [16] registered Oct 9 01:04:56.108405 kernel: acpiphp: Slot [17] registered Oct 9 01:04:56.108419 kernel: acpiphp: Slot [18] registered Oct 9 01:04:56.108434 kernel: acpiphp: Slot [19] registered Oct 9 01:04:56.108450 kernel: acpiphp: Slot [20] registered Oct 9 01:04:56.108466 kernel: acpiphp: Slot [21] registered Oct 9 01:04:56.108480 kernel: acpiphp: Slot [22] registered Oct 9 01:04:56.108499 kernel: acpiphp: Slot [23] registered Oct 9 01:04:56.108515 kernel: acpiphp: Slot [24] registered Oct 9 01:04:56.108530 kernel: acpiphp: Slot [25] registered Oct 9 01:04:56.108546 kernel: acpiphp: Slot [26] registered Oct 9 01:04:56.108563 kernel: acpiphp: Slot [27] registered Oct 9 01:04:56.108578 kernel: acpiphp: Slot [28] registered Oct 9 01:04:56.108593 kernel: acpiphp: Slot [29] registered Oct 9 01:04:56.108609 kernel: acpiphp: Slot [30] registered Oct 9 01:04:56.108624 kernel: acpiphp: Slot [31] registered Oct 9 01:04:56.108645 kernel: PCI host bridge to bus 0000:00 Oct 9 01:04:56.108837 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Oct 9 01:04:56.108977 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 01:04:56.109106 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 01:04:56.109346 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 9 01:04:56.109483 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 01:04:56.109642 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 9 01:04:56.109812 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 9 01:04:56.109954 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Oct 9 01:04:56.110085 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 9 01:04:56.110216 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Oct 9 01:04:56.111489 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Oct 9 01:04:56.111723 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Oct 9 01:04:56.111886 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Oct 9 01:04:56.112034 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Oct 9 01:04:56.112174 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Oct 9 01:04:56.112343 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Oct 9 01:04:56.112508 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Oct 9 01:04:56.112652 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Oct 9 01:04:56.112812 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Oct 9 01:04:56.112941 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 01:04:56.114250 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 9 01:04:56.114400 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Oct 9 01:04:56.114541 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 9 01:04:56.114672 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Oct 9 01:04:56.114693 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 01:04:56.114709 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 01:04:56.114733 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 01:04:56.114748 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 01:04:56.114763 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 9 01:04:56.114800 kernel: iommu: Default domain type: Translated Oct 9 01:04:56.114814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 01:04:56.114832 kernel: PCI: Using ACPI for IRQ routing Oct 9 01:04:56.114848 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 01:04:56.114862 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 9 01:04:56.114875 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Oct 9 01:04:56.115021 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Oct 9 01:04:56.115631 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Oct 9 01:04:56.115961 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 01:04:56.115990 kernel: vgaarb: loaded Oct 9 01:04:56.116047 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Oct 9 01:04:56.116065 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Oct 9 01:04:56.116216 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 01:04:56.116236 kernel: VFS: Disk quotas dquot_6.6.0 Oct 
9 01:04:56.116253 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 01:04:56.116309 kernel: pnp: PnP ACPI init Oct 9 01:04:56.116328 kernel: pnp: PnP ACPI: found 5 devices Oct 9 01:04:56.116345 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 01:04:56.116398 kernel: NET: Registered PF_INET protocol family Oct 9 01:04:56.116417 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 01:04:56.116435 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 9 01:04:56.116488 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 01:04:56.116506 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 9 01:04:56.116553 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 9 01:04:56.116580 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 9 01:04:56.116597 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 01:04:56.116648 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 01:04:56.116669 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 01:04:56.116686 kernel: NET: Registered PF_XDP protocol family Oct 9 01:04:56.117038 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 01:04:56.117237 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 01:04:56.117357 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 01:04:56.117478 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 9 01:04:56.117613 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 9 01:04:56.117634 kernel: PCI: CLS 0 bytes, default 64 Oct 9 01:04:56.117650 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 9 01:04:56.117666 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Oct 9 01:04:56.117681 kernel: clocksource: Switched to clocksource tsc Oct 9 01:04:56.117696 kernel: Initialise system trusted keyrings Oct 9 01:04:56.117712 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 9 01:04:56.117731 kernel: Key type asymmetric registered Oct 9 01:04:56.117746 kernel: Asymmetric key parser 'x509' registered Oct 9 01:04:56.117762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 01:04:56.117811 kernel: io scheduler mq-deadline registered Oct 9 01:04:56.117825 kernel: io scheduler kyber registered Oct 9 01:04:56.117839 kernel: io scheduler bfq registered Oct 9 01:04:56.117852 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 01:04:56.117867 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 01:04:56.117882 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 01:04:56.117901 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 01:04:56.117917 kernel: i8042: Warning: Keylock active Oct 9 01:04:56.117932 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 01:04:56.117947 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 01:04:56.118096 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 9 01:04:56.118226 kernel: rtc_cmos 00:00: registered as rtc0 Oct 9 01:04:56.118353 kernel: rtc_cmos 00:00: setting system clock to 2024-10-09T01:04:55 UTC (1728435895) Oct 9 01:04:56.118476 kernel: rtc_cmos 
00:00: alarms up to one day, 114 bytes nvram Oct 9 01:04:56.118500 kernel: intel_pstate: CPU model not supported Oct 9 01:04:56.118516 kernel: NET: Registered PF_INET6 protocol family Oct 9 01:04:56.118532 kernel: Segment Routing with IPv6 Oct 9 01:04:56.118548 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 01:04:56.118564 kernel: NET: Registered PF_PACKET protocol family Oct 9 01:04:56.118580 kernel: Key type dns_resolver registered Oct 9 01:04:56.118594 kernel: IPI shorthand broadcast: enabled Oct 9 01:04:56.118610 kernel: sched_clock: Marking stable (617003770, 315960629)->(1063444466, -130480067) Oct 9 01:04:56.118625 kernel: registered taskstats version 1 Oct 9 01:04:56.118644 kernel: Loading compiled-in X.509 certificates Oct 9 01:04:56.118659 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 01:04:56.118675 kernel: Key type .fscrypt registered Oct 9 01:04:56.118690 kernel: Key type fscrypt-provisioning registered Oct 9 01:04:56.118706 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 9 01:04:56.118723 kernel: ima: Allocated hash algorithm: sha1 Oct 9 01:04:56.118738 kernel: ima: No architecture policies found Oct 9 01:04:56.118751 kernel: clk: Disabling unused clocks Oct 9 01:04:56.118766 kernel: Freeing unused kernel image (initmem) memory: 42872K Oct 9 01:04:56.118826 kernel: Write protecting the kernel read-only data: 36864k Oct 9 01:04:56.118843 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Oct 9 01:04:56.118859 kernel: Run /init as init process Oct 9 01:04:56.118876 kernel: with arguments: Oct 9 01:04:56.118892 kernel: /init Oct 9 01:04:56.118908 kernel: with environment: Oct 9 01:04:56.118924 kernel: HOME=/ Oct 9 01:04:56.118940 kernel: TERM=linux Oct 9 01:04:56.118956 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 01:04:56.118983 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:04:56.119017 systemd[1]: Detected virtualization amazon. Oct 9 01:04:56.119035 systemd[1]: Detected architecture x86-64. Oct 9 01:04:56.119052 systemd[1]: Running in initrd. Oct 9 01:04:56.119070 systemd[1]: No hostname configured, using default hostname. Oct 9 01:04:56.119090 systemd[1]: Hostname set to . Oct 9 01:04:56.119109 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:04:56.119127 systemd[1]: Queued start job for default target initrd.target. Oct 9 01:04:56.119146 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:04:56.119164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:04:56.119183 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 01:04:56.119204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:04:56.119222 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 01:04:56.119244 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
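
A few entries earlier, rtc_cmos set the system clock to 2024-10-09T01:04:55 UTC and printed the matching epoch value 1728435895 (the same second shows up in the audit record's timestamp). A one-liner to confirm the two encodings agree.

from datetime import datetime, timezone

# Epoch seconds as printed by rtc_cmos when it set the system clock.
epoch = 1_728_435_895
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2024-10-09T01:04:55+00:00
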
Oct 9 01:04:56.119264 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 01:04:56.119283 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 01:04:56.119301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:04:56.119319 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:04:56.119336 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:04:56.119358 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:04:56.119375 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:04:56.119393 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:04:56.119411 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:04:56.119429 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:04:56.119447 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 01:04:56.119469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 01:04:56.119490 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 01:04:56.119508 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:04:56.119525 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:04:56.119542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:04:56.119563 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:04:56.119648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 01:04:56.119988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:04:56.120016 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 01:04:56.120035 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 01:04:56.120058 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:04:56.120079 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:04:56.120098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:04:56.120115 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 01:04:56.120166 systemd-journald[177]: Collecting audit messages is disabled. Oct 9 01:04:56.120211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:04:56.120230 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 01:04:56.120250 systemd-journald[177]: Journal started Oct 9 01:04:56.120290 systemd-journald[177]: Runtime Journal (/run/log/journal/ec2eb3c7ce9b944d31c6a4926cef5280) is 4.8M, max 38.6M, 33.7M free. Oct 9 01:04:56.134810 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:04:56.136828 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:04:56.150527 systemd-modules-load[179]: Inserted module 'overlay' Oct 9 01:04:56.316691 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 01:04:56.316729 kernel: Bridge firewalling registered Oct 9 01:04:56.156975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
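
The device units being waited for above use systemd's escaped path syntax, e.g. /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Below is a simplified re-implementation of that escaping (roughly what systemd-escape --path --suffix=device does); it skips edge cases such as an empty path or a leading dot.

# Simplified sketch of systemd's path-to-unit-name escaping:
#  - drop the leading "/", join path components with "-",
#  - escape every character that is not [A-Za-z0-9:_.] as \xNN ("-" -> \x2d).
def escape_path_unit(path: str, suffix: str = "device") -> str:
    parts = []
    for part in path.strip("/").split("/"):
        parts.append("".join(
            c if c.isalnum() or c in ":_." else f"\\x{ord(c):02x}"
            for c in part
        ))
    return "-".join(parts) + "." + suffix

print(escape_path_unit("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above
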
Oct 9 01:04:56.213484 systemd-modules-load[179]: Inserted module 'br_netfilter' Oct 9 01:04:56.318716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:04:56.319283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:04:56.326305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:04:56.347393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:04:56.362211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:04:56.381012 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:04:56.403974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:04:56.409334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:04:56.435322 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:04:56.438213 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:04:56.440468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:04:56.478106 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 01:04:56.495932 dracut-cmdline[215]: dracut-dracut-053 Oct 9 01:04:56.499950 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:04:56.510120 systemd-resolved[204]: Positive Trust Anchors: Oct 9 01:04:56.510140 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:04:56.510187 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:04:56.524678 systemd-resolved[204]: Defaulting to hostname 'linux'. Oct 9 01:04:56.527197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:04:56.529467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:04:56.586812 kernel: SCSI subsystem initialized Oct 9 01:04:56.595804 kernel: Loading iSCSI transport class v2.0-870. Oct 9 01:04:56.606803 kernel: iscsi: registered transport (tcp) Oct 9 01:04:56.629105 kernel: iscsi: registered transport (qla4xxx) Oct 9 01:04:56.629202 kernel: QLogic iSCSI HBA Driver Oct 9 01:04:56.669471 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 01:04:56.675980 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
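
The positive trust anchor that systemd-resolved logs above is the standard DNSSEC root DS record (key tag 20326, the 2017 root KSK). A tiny parser that labels its fields; the field names come from RFC 4034, the helper itself is just illustrative.

# Split the DS record logged by systemd-resolved into its RFC 4034 fields:
#   owner  class  type  key-tag  algorithm  digest-type  digest
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = record.split()
ds = {
    "owner": owner,                   # "." -> the DNS root zone
    "class": rrclass,                 # IN
    "type": rrtype,                   # DS
    "key_tag": int(key_tag),          # 20326
    "algorithm": int(algorithm),      # 8 -> RSA/SHA-256
    "digest_type": int(digest_type),  # 2 -> SHA-256 of the root DNSKEY
    "digest": digest,
}
print(ds["key_tag"], len(bytes.fromhex(ds["digest"])), "byte digest")  # 20326 32 byte digest
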
Oct 9 01:04:56.712531 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 01:04:56.712613 kernel: device-mapper: uevent: version 1.0.3 Oct 9 01:04:56.712635 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 01:04:56.767712 kernel: raid6: avx512x4 gen() 13116 MB/s Oct 9 01:04:56.785818 kernel: raid6: avx512x2 gen() 4746 MB/s Oct 9 01:04:56.802808 kernel: raid6: avx512x1 gen() 6843 MB/s Oct 9 01:04:56.821824 kernel: raid6: avx2x4 gen() 6966 MB/s Oct 9 01:04:56.838868 kernel: raid6: avx2x2 gen() 12625 MB/s Oct 9 01:04:56.855866 kernel: raid6: avx2x1 gen() 9971 MB/s Oct 9 01:04:56.856099 kernel: raid6: using algorithm avx512x4 gen() 13116 MB/s Oct 9 01:04:56.875156 kernel: raid6: .... xor() 5563 MB/s, rmw enabled Oct 9 01:04:56.875282 kernel: raid6: using avx512x2 recovery algorithm Oct 9 01:04:56.901806 kernel: xor: automatically using best checksumming function avx Oct 9 01:04:57.143808 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 01:04:57.158900 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:04:57.164189 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:04:57.186315 systemd-udevd[398]: Using default interface naming scheme 'v255'. Oct 9 01:04:57.192694 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:04:57.201022 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 01:04:57.229702 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Oct 9 01:04:57.271463 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:04:57.276988 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:04:57.386196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:04:57.398607 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 01:04:57.465151 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 01:04:57.471501 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 01:04:57.474264 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:04:57.478517 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:04:57.489440 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 01:04:57.531808 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 01:04:57.537865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:04:57.552806 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 01:04:57.552869 kernel: AES CTR mode by8 optimization enabled Oct 9 01:04:57.559745 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 9 01:04:57.560029 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 9 01:04:57.576334 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 9 01:04:57.576625 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 9 01:04:57.579159 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 9 01:04:57.582906 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Oct 9 01:04:57.583621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:04:57.596917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Oct 9 01:04:57.596955 kernel: GPT:9289727 != 16777215 Oct 9 01:04:57.597119 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 01:04:57.597155 kernel: GPT:9289727 != 16777215 Oct 9 01:04:57.597174 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 01:04:57.597202 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 01:04:57.583807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:04:57.601904 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:21:e2:c4:a4:cd Oct 9 01:04:57.585404 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:04:57.588758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:04:57.588955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:04:57.591618 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:04:57.609297 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line. Oct 9 01:04:57.611911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:04:57.735804 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (453) Oct 9 01:04:57.743835 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (459) Oct 9 01:04:57.780678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Oct 9 01:04:57.820866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:04:57.830999 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:04:57.866900 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Oct 9 01:04:57.867014 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Oct 9 01:04:57.890969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Oct 9 01:04:57.893277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:04:57.918701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 9 01:04:57.929120 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 01:04:57.946823 disk-uuid[631]: Primary Header is updated. Oct 9 01:04:57.946823 disk-uuid[631]: Secondary Entries is updated. Oct 9 01:04:57.946823 disk-uuid[631]: Secondary Header is updated. Oct 9 01:04:57.956872 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 01:04:57.969893 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 01:04:57.978920 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 01:04:58.977154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 01:04:58.978634 disk-uuid[632]: The operation has completed successfully. Oct 9 01:04:59.118144 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 01:04:59.118288 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 01:04:59.152129 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
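
The GPT warnings above ("GPT:9289727 != 16777215") mean the backup GPT header still sits where the end of the original disk image falls, while the attached volume's last sector is 16777215. A quick size conversion, assuming 512-byte logical sectors (the log does not state the sector size); the disk-uuid header updates just above are consistent with the headers being rewritten on first boot.

# Kernel logged: GPT:9289727 != 16777215
# i.e. the LBA recorded for the backup header vs. the device's last LBA.
SECTOR = 512                  # assumed logical sector size

alt_header_lba = 9_289_727    # where the image's backup GPT header sits
last_lba = 16_777_215         # last sector of the attached volume

image_bytes = (alt_header_lba + 1) * SECTOR
disk_bytes = (last_lba + 1) * SECTOR

print(f"image  ~ {image_bytes / 2**30:.2f} GiB")  # ~4.43 GiB image
print(f"volume = {disk_bytes / 2**30:.0f} GiB")   # 8 GiB volume
# The mismatch is why the kernel suggests repairing/relocating the backup header.
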
Oct 9 01:04:59.158633 sh[975]: Success Oct 9 01:04:59.190968 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 9 01:04:59.301978 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 01:04:59.310912 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 01:04:59.314593 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 01:04:59.355454 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377 Oct 9 01:04:59.355554 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:04:59.355575 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 01:04:59.356286 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 01:04:59.356891 kernel: BTRFS info (device dm-0): using free space tree Oct 9 01:04:59.469819 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 9 01:04:59.496373 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 01:04:59.499944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 01:04:59.515110 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 01:04:59.530368 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 01:04:59.555453 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:04:59.555537 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:04:59.555562 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 9 01:04:59.559815 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 9 01:04:59.574029 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 01:04:59.576342 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:04:59.611766 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 01:04:59.624101 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 01:04:59.719904 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:04:59.747084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:04:59.870383 systemd-networkd[1167]: lo: Link UP Oct 9 01:04:59.870437 systemd-networkd[1167]: lo: Gained carrier Oct 9 01:04:59.876390 systemd-networkd[1167]: Enumeration completed Oct 9 01:04:59.878082 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:04:59.879902 systemd[1]: Reached target network.target - Network. Oct 9 01:04:59.880984 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:04:59.880989 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:04:59.890863 systemd-networkd[1167]: eth0: Link UP Oct 9 01:04:59.890871 systemd-networkd[1167]: eth0: Gained carrier Oct 9 01:04:59.890973 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
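
verity-setup above binds /dev/mapper/usr to the verity.usrhash value from the kernel command line, with the sha256-avx2 implementation doing the hashing. The sketch below is only a conceptual illustration of why a Merkle-style root hash makes any later change to a block detectable; it is not the actual dm-verity on-disk format, which adds a salt, a superblock and a fixed hash-tree layout.

import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def merkle_root(data: bytes, block: int = BLOCK) -> bytes:
    # Toy Merkle root: hash each block, then repeatedly hash groups of
    # concatenated digests until a single digest remains.
    level = [hashlib.sha256(data[i:i + block]).digest()
             for i in range(0, len(data), block)] or [hashlib.sha256(b"").digest()]
    fanout = block // 32  # how many sha256 digests fit in one hash block
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                 for i in range(0, len(level), fanout)]
    return level[0]

image = bytes(3 * BLOCK)        # stand-in for the read-only /usr contents
root = merkle_root(image)

tampered = bytearray(image)
tampered[5000] ^= 0xFF          # flip one bit
assert merkle_root(bytes(tampered)) != root  # any change alters the root hash
print(root.hex())
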
Oct 9 01:04:59.912954 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.16.164/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 9 01:05:00.083192 ignition[1094]: Ignition 2.19.0 Oct 9 01:05:00.083203 ignition[1094]: Stage: fetch-offline Oct 9 01:05:00.083389 ignition[1094]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:05:00.086460 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:05:00.083398 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 01:05:00.083809 ignition[1094]: Ignition finished successfully Oct 9 01:05:00.101096 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 9 01:05:00.131752 ignition[1176]: Ignition 2.19.0 Oct 9 01:05:00.131766 ignition[1176]: Stage: fetch Oct 9 01:05:00.132990 ignition[1176]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:05:00.133004 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 01:05:00.133117 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 01:05:00.143966 ignition[1176]: PUT result: OK Oct 9 01:05:00.148116 ignition[1176]: parsed url from cmdline: "" Oct 9 01:05:00.148130 ignition[1176]: no config URL provided Oct 9 01:05:00.148142 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:05:00.148158 ignition[1176]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:05:00.148193 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 01:05:00.150226 ignition[1176]: PUT result: OK Oct 9 01:05:00.150292 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 9 01:05:00.156217 ignition[1176]: GET result: OK Oct 9 01:05:00.156301 ignition[1176]: parsing config with SHA512: 1ccbf308e8d83bdb5f1768d4570d701b8198604a6c4fa82f87f7e101daad6082891b4482f72f75ad56191e1d572c6a676fb7b848f063884c1dd7b3d64b615350 Oct 9 01:05:00.161988 unknown[1176]: fetched base config from "system" Oct 9 01:05:00.161995 unknown[1176]: fetched base config from "system" Oct 9 01:05:00.162609 ignition[1176]: fetch: fetch complete Oct 9 01:05:00.162000 unknown[1176]: fetched user config from "aws" Oct 9 01:05:00.162615 ignition[1176]: fetch: fetch passed Oct 9 01:05:00.165069 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 01:05:00.162681 ignition[1176]: Ignition finished successfully Oct 9 01:05:00.179264 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 01:05:00.204098 ignition[1182]: Ignition 2.19.0 Oct 9 01:05:00.204112 ignition[1182]: Stage: kargs Oct 9 01:05:00.204607 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:05:00.204621 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 01:05:00.205012 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 01:05:00.206547 ignition[1182]: PUT result: OK Oct 9 01:05:00.224147 ignition[1182]: kargs: kargs passed Oct 9 01:05:00.224344 ignition[1182]: Ignition finished successfully Oct 9 01:05:00.231826 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 01:05:00.238226 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
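
Ignition's fetch stage above performs the IMDSv2 sequence: a PUT to http://169.254.169.254/latest/api/token for a session token, then a GET of the user-data with that token, then it logs the SHA512 of the config it parsed. A minimal sketch of the same sequence with urllib; it only works from inside an EC2 instance, and the token header names are the standard IMDSv2 ones, not something taken from this log.

import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT for a session token, as in the "PUT .../latest/api/token" lines.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req, timeout=5).read().decode()

# Step 2: GET the user-data with the token, as in the "GET .../user-data" line.
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=5).read()

# Step 3: the digest Ignition reports as "parsing config with SHA512: ...".
print(hashlib.sha512(user_data).hexdigest())
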
Oct 9 01:05:00.273204 ignition[1188]: Ignition 2.19.0 Oct 9 01:05:00.273217 ignition[1188]: Stage: disks Oct 9 01:05:00.273820 ignition[1188]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:05:00.273835 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 01:05:00.273950 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 01:05:00.277980 ignition[1188]: PUT result: OK Oct 9 01:05:00.294322 ignition[1188]: disks: disks passed Oct 9 01:05:00.295209 ignition[1188]: Ignition finished successfully Oct 9 01:05:00.298295 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 01:05:00.306644 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 01:05:00.308642 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 01:05:00.315633 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:05:00.316793 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:05:00.319698 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:05:00.326976 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 01:05:00.402655 systemd-fsck[1196]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 01:05:00.409950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 01:05:00.422194 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 01:05:00.566810 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none. Oct 9 01:05:00.567351 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 01:05:00.568213 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 01:05:00.589295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:05:00.593091 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 01:05:00.596132 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 01:05:00.596336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 01:05:00.596371 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:05:00.610900 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1215) Oct 9 01:05:00.614348 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:05:00.616556 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:05:00.616675 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 9 01:05:00.629299 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 9 01:05:00.654328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 01:05:00.659097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 01:05:00.683057 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
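
systemd-fsck reports "ROOT: clean, 14/553520 files, 52654/553472 blocks" just before /sysroot is mounted. A trivial conversion of those counters into utilisation; the 4 KiB block size is assumed (ext4's common default), it is not stated in the log.

# "ROOT: clean, 14/553520 files, 52654/553472 blocks"
inodes_used, inodes_total = 14, 553_520
blocks_used, blocks_total = 52_654, 553_472
BLOCK_SIZE = 4096                  # assumed ext4 block size

print(f"inodes: {inodes_used / inodes_total:.4%} used")          # 0.0025% used
print(f"blocks: {blocks_used / blocks_total:.1%} used")          # 9.5% used
print(f"fs size ~ {blocks_total * BLOCK_SIZE / 2**30:.2f} GiB")  # ~2.11 GiB
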
Oct 9 01:05:01.193866 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 01:05:01.225406 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory Oct 9 01:05:01.249687 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 01:05:01.263528 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 01:05:01.669284 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 01:05:01.682055 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 01:05:01.687057 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 01:05:01.697576 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 01:05:01.698770 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:05:01.736370 ignition[1328]: INFO : Ignition 2.19.0 Oct 9 01:05:01.737585 ignition[1328]: INFO : Stage: mount Oct 9 01:05:01.736836 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 01:05:01.743084 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:05:01.743084 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 01:05:01.747273 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 01:05:01.750608 ignition[1328]: INFO : PUT result: OK Oct 9 01:05:01.755669 ignition[1328]: INFO : mount: mount passed Oct 9 01:05:01.756755 ignition[1328]: INFO : Ignition finished successfully Oct 9 01:05:01.757661 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 01:05:01.774795 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 01:05:01.822112 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:05:01.842448 systemd-networkd[1167]: eth0: Gained IPv6LL Oct 9 01:05:01.861912 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1340) Oct 9 01:05:01.865808 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:05:01.866013 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:05:01.868688 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 9 01:05:01.884152 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 9 01:05:01.893345 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
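
The Ignition files stage logged below writes a helm tarball fetched over HTTPS, several manifests, an update.conf, a kubernetes sysext link and a prepare-helm.service unit that gets an "enabled" preset. The actual user-data is not included in the log; the following is a hypothetical, heavily trimmed Ignition v3-style config (rendered from a Python dict) of the general shape that produces operations like these. The unit body and the file list here are placeholders, not the real ones.

import json

# Hypothetical, trimmed-down Ignition config; only the shape is meant to match
# the kind of operations the files stage reports below.
config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            },
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,  # corresponds to the "setting preset to enabled" step
                "contents": "[Unit]\nDescription=placeholder\n[Install]\nWantedBy=multi-user.target\n",
            },
        ],
    },
}

print(json.dumps(config, indent=2))
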
Oct 9 01:05:01.999376 ignition[1357]: INFO : Ignition 2.19.0 Oct 9 01:05:02.002668 ignition[1357]: INFO : Stage: files Oct 9 01:05:02.002668 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:05:02.002668 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 01:05:02.002668 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 01:05:02.029714 ignition[1357]: INFO : PUT result: OK Oct 9 01:05:02.064890 ignition[1357]: DEBUG : files: compiled without relabeling support, skipping Oct 9 01:05:02.068898 ignition[1357]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 01:05:02.068898 ignition[1357]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 01:05:02.127271 ignition[1357]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 01:05:02.130396 ignition[1357]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 01:05:02.132695 unknown[1357]: wrote ssh authorized keys file for user: core Oct 9 01:05:02.139218 ignition[1357]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 01:05:02.145308 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 01:05:02.149597 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 01:05:02.232640 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 01:05:02.392024 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:05:02.402274 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:05:02.446117 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:05:02.446117 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:05:02.446117 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:05:02.446117 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:05:02.446117 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:05:02.446117 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Oct 9 01:05:02.740665 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 9 01:05:03.344764 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 9 01:05:03.344764 ignition[1357]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 9 01:05:03.352953 ignition[1357]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:05:03.356281 ignition[1357]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:05:03.356281 ignition[1357]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 9 01:05:03.356281 ignition[1357]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 9 01:05:03.356281 ignition[1357]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 01:05:03.356281 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:05:03.356281 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:05:03.356281 ignition[1357]: INFO : files: files passed Oct 9 01:05:03.356281 ignition[1357]: INFO : Ignition finished successfully Oct 9 01:05:03.357047 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 01:05:03.368000 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 01:05:03.379086 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 01:05:03.383483 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 01:05:03.383578 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 01:05:03.423622 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:05:03.423622 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:05:03.428836 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:05:03.434520 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:05:03.435372 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 01:05:03.447524 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 01:05:03.488167 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Oct 9 01:05:03.488298 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 01:05:03.491561 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 01:05:03.493264 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 01:05:03.494597 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 01:05:03.499557 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 01:05:03.518962 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:05:03.526998 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 01:05:03.540392 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:05:03.540619 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:05:03.545944 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 01:05:03.547988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 01:05:03.549353 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:05:03.562355 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 01:05:03.568574 systemd[1]: Stopped target basic.target - Basic System. Oct 9 01:05:03.572513 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 01:05:03.575415 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:05:03.577362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 01:05:03.581462 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 01:05:03.587306 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 01:05:03.591124 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 01:05:03.593602 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 01:05:03.595057 systemd[1]: Stopped target swap.target - Swaps. Oct 9 01:05:03.602910 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 01:05:03.603111 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:05:03.606667 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:05:03.609033 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:05:03.610604 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 01:05:03.613672 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:05:03.616595 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 01:05:03.617882 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 01:05:03.620504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 01:05:03.622123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:05:03.625476 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 01:05:03.625965 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 01:05:03.633046 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 01:05:03.641323 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 01:05:03.642755 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Oct 9 01:05:03.644282 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:05:03.649056 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 01:05:03.649269 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:05:03.661879 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 01:05:03.663542 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 01:05:03.679083 ignition[1410]: INFO : Ignition 2.19.0 Oct 9 01:05:03.688539 ignition[1410]: INFO : Stage: umount Oct 9 01:05:03.688539 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:05:03.688539 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 01:05:03.688539 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 01:05:03.695372 ignition[1410]: INFO : PUT result: OK Oct 9 01:05:03.697935 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 01:05:03.700162 ignition[1410]: INFO : umount: umount passed Oct 9 01:05:03.705626 ignition[1410]: INFO : Ignition finished successfully Oct 9 01:05:03.706329 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 01:05:03.706749 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 01:05:03.712476 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 01:05:03.712577 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 01:05:03.715774 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 01:05:03.716430 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 01:05:03.721073 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 01:05:03.722702 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 01:05:03.724474 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 9 01:05:03.724530 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 9 01:05:03.727378 systemd[1]: Stopped target network.target - Network. Oct 9 01:05:03.727433 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 01:05:03.727488 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:05:03.730901 systemd[1]: Stopped target paths.target - Path Units. Oct 9 01:05:03.733231 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 01:05:03.740131 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:05:03.740686 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 01:05:03.744137 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 01:05:03.747040 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 01:05:03.747365 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:05:03.749447 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 01:05:03.749507 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:05:03.755678 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 01:05:03.755845 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 01:05:03.758408 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 01:05:03.758479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 01:05:03.760968 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Oct 9 01:05:03.761026 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 01:05:03.762812 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 01:05:03.764365 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 01:05:03.772222 systemd-networkd[1167]: eth0: DHCPv6 lease lost Oct 9 01:05:03.782054 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 01:05:03.782219 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 01:05:03.794588 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 01:05:03.794867 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 01:05:03.803693 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 01:05:03.803761 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:05:03.812970 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 01:05:03.814568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 01:05:03.814739 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:05:03.818221 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 01:05:03.818341 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:05:03.837608 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 01:05:03.837692 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 01:05:03.840982 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 01:05:03.841063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:05:03.843029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:05:03.861448 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 01:05:03.862718 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:05:03.868631 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 01:05:03.868872 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 01:05:03.873877 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 01:05:03.874018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:05:03.877046 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 01:05:03.877169 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:05:03.884042 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 01:05:03.884167 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 01:05:03.890604 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:05:03.890809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:05:03.914815 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 01:05:03.922721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 01:05:03.922834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:05:03.927368 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 9 01:05:03.927451 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 9 01:05:03.930352 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 01:05:03.930411 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:05:03.931678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:05:03.931723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:05:03.935987 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 01:05:03.937469 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 01:05:03.944855 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 01:05:03.944972 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 01:05:03.949940 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 01:05:03.956052 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 01:05:03.973476 systemd[1]: Switching root. Oct 9 01:05:04.019464 systemd-journald[177]: Journal stopped Oct 9 01:05:06.294839 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Oct 9 01:05:06.294935 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 01:05:06.294959 kernel: SELinux: policy capability open_perms=1 Oct 9 01:05:06.294979 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 01:05:06.294998 kernel: SELinux: policy capability always_check_network=0 Oct 9 01:05:06.295014 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 01:05:06.295037 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 01:05:06.295054 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 01:05:06.295073 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 01:05:06.295096 kernel: audit: type=1403 audit(1728435904.712:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 01:05:06.295120 systemd[1]: Successfully loaded SELinux policy in 79.639ms. Oct 9 01:05:06.295154 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.498ms. Oct 9 01:05:06.295174 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:05:06.295193 systemd[1]: Detected virtualization amazon. Oct 9 01:05:06.295220 systemd[1]: Detected architecture x86-64. Oct 9 01:05:06.295244 systemd[1]: Detected first boot. Oct 9 01:05:06.295263 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:05:06.295284 zram_generator::config[1453]: No configuration found. Oct 9 01:05:06.295309 systemd[1]: Populated /etc with preset unit settings. Oct 9 01:05:06.295330 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 9 01:05:06.295352 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 9 01:05:06.295372 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 9 01:05:06.295397 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 01:05:06.295421 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 01:05:06.295438 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 01:05:06.295457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Oct 9 01:05:06.295478 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 01:05:06.295499 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 01:05:06.295521 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 01:05:06.295543 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 01:05:06.295564 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:05:06.295587 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:05:06.295610 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 01:05:06.295629 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 01:05:06.295649 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 01:05:06.295677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:05:06.295698 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 9 01:05:06.295719 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:05:06.295742 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 9 01:05:06.295764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 9 01:05:06.295822 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 9 01:05:06.295845 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 01:05:06.295867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:05:06.295890 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:05:06.295972 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:05:06.295995 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:05:06.296016 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 01:05:06.296038 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 01:05:06.296062 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:05:06.296082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:05:06.296103 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:05:06.296123 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 01:05:06.296145 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 01:05:06.296170 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 01:05:06.296193 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 01:05:06.296216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:06.296239 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 01:05:06.296266 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 01:05:06.296288 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 9 01:05:06.296311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 01:05:06.296334 systemd[1]: Reached target machines.target - Containers. Oct 9 01:05:06.296356 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 01:05:06.296379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:05:06.296401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:05:06.296423 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 01:05:06.296449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:05:06.296471 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:05:06.296495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:05:06.296517 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 01:05:06.296538 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:05:06.296561 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 01:05:06.296583 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 9 01:05:06.296606 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 9 01:05:06.296628 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 9 01:05:06.296656 systemd[1]: Stopped systemd-fsck-usr.service. Oct 9 01:05:06.296677 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:05:06.296698 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:05:06.296720 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 01:05:06.296744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 01:05:06.296766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:05:06.297932 systemd[1]: verity-setup.service: Deactivated successfully. Oct 9 01:05:06.297962 systemd[1]: Stopped verity-setup.service. Oct 9 01:05:06.297985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:06.298013 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 01:05:06.298103 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 01:05:06.298127 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 01:05:06.298148 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 01:05:06.298170 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 01:05:06.298196 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 01:05:06.298218 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:05:06.298241 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 01:05:06.298263 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 01:05:06.298286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 9 01:05:06.298307 kernel: loop: module loaded Oct 9 01:05:06.298329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:05:06.298350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:05:06.298375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:05:06.298397 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:05:06.298420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:05:06.298445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 01:05:06.298467 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 01:05:06.298492 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 01:05:06.298514 kernel: fuse: init (API version 7.39) Oct 9 01:05:06.298571 systemd-journald[1528]: Collecting audit messages is disabled. Oct 9 01:05:06.298611 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:05:06.298633 systemd-journald[1528]: Journal started Oct 9 01:05:06.298674 systemd-journald[1528]: Runtime Journal (/run/log/journal/ec2eb3c7ce9b944d31c6a4926cef5280) is 4.8M, max 38.6M, 33.7M free. Oct 9 01:05:06.313574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:05:06.313654 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 01:05:06.313682 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 01:05:06.313705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:05:06.313728 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 01:05:05.647094 systemd[1]: Queued start job for default target multi-user.target. Oct 9 01:05:05.696655 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Oct 9 01:05:05.697168 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 9 01:05:06.322999 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:05:06.326424 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 01:05:06.364909 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 01:05:06.368930 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 01:05:06.368985 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:05:06.378103 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 01:05:06.388264 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 01:05:06.392172 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 01:05:06.393734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:05:06.397961 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 01:05:06.402129 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 01:05:06.403733 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 9 01:05:06.414301 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 01:05:06.440824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:05:06.446020 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 01:05:06.450112 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 01:05:06.452025 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 01:05:06.527965 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 01:05:06.533900 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 01:05:06.537330 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 01:05:06.548130 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 01:05:06.558290 systemd-journald[1528]: Time spent on flushing to /var/log/journal/ec2eb3c7ce9b944d31c6a4926cef5280 is 73.583ms for 961 entries. Oct 9 01:05:06.558290 systemd-journald[1528]: System Journal (/var/log/journal/ec2eb3c7ce9b944d31c6a4926cef5280) is 8.0M, max 195.6M, 187.6M free. Oct 9 01:05:06.639737 systemd-journald[1528]: Received client request to flush runtime journal. Oct 9 01:05:06.639831 kernel: ACPI: bus type drm_connector registered Oct 9 01:05:06.639865 kernel: loop0: detected capacity change from 0 to 210664 Oct 9 01:05:06.555550 systemd-tmpfiles[1545]: ACLs are not supported, ignoring. Oct 9 01:05:06.555579 systemd-tmpfiles[1545]: ACLs are not supported, ignoring. Oct 9 01:05:06.556564 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:05:06.572596 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 01:05:06.577720 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:05:06.579085 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:05:06.593860 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:05:06.603846 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 01:05:06.645923 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 01:05:06.695627 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 01:05:06.704702 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 01:05:06.705585 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 01:05:06.715744 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 01:05:06.743018 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 01:05:06.756347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:05:06.759027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:05:06.764938 kernel: loop1: detected capacity change from 0 to 62848 Oct 9 01:05:06.798213 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Oct 9 01:05:06.798632 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Oct 9 01:05:06.810694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 9 01:05:06.881833 kernel: loop2: detected capacity change from 0 to 138192 Oct 9 01:05:07.014429 kernel: loop3: detected capacity change from 0 to 140992 Oct 9 01:05:07.133186 kernel: loop4: detected capacity change from 0 to 210664 Oct 9 01:05:07.158815 kernel: loop5: detected capacity change from 0 to 62848 Oct 9 01:05:07.174810 kernel: loop6: detected capacity change from 0 to 138192 Oct 9 01:05:07.229824 kernel: loop7: detected capacity change from 0 to 140992 Oct 9 01:05:07.266134 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Oct 9 01:05:07.268728 (sd-merge)[1608]: Merged extensions into '/usr'. Oct 9 01:05:07.274148 systemd[1]: Reloading requested from client PID 1580 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 01:05:07.275484 systemd[1]: Reloading... Oct 9 01:05:07.419814 zram_generator::config[1634]: No configuration found. Oct 9 01:05:07.791878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:05:07.900173 ldconfig[1572]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 01:05:07.904611 systemd[1]: Reloading finished in 628 ms. Oct 9 01:05:07.934553 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 01:05:07.936367 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 01:05:07.949129 systemd[1]: Starting ensure-sysext.service... Oct 9 01:05:07.957020 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:05:07.983073 systemd[1]: Reloading requested from client PID 1684 ('systemctl') (unit ensure-sysext.service)... Oct 9 01:05:07.983093 systemd[1]: Reloading... Oct 9 01:05:07.986151 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 01:05:07.986815 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 01:05:07.988801 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 01:05:07.989440 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Oct 9 01:05:07.989616 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Oct 9 01:05:07.997884 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:05:07.998067 systemd-tmpfiles[1685]: Skipping /boot Oct 9 01:05:08.020571 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:05:08.020687 systemd-tmpfiles[1685]: Skipping /boot Oct 9 01:05:08.116810 zram_generator::config[1715]: No configuration found. Oct 9 01:05:08.260461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:05:08.322583 systemd[1]: Reloading finished in 339 ms. Oct 9 01:05:08.343656 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 01:05:08.356619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:05:08.376172 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
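[Editor's note] The loop4-loop7 capacity-change messages and the "(sd-merge) ... Merged extensions into '/usr'" lines above are systemd-sysext attaching one loop device per extension image ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') and overlaying them onto /usr and /opt. A small sketch of how the same merge can be inspected or re-driven from userspace, assuming the stock systemd-sysext CLI verbs and the /etc/extensions search path used by the kubernetes.raw link written earlier:

```python
# Sketch: inspect and re-run the extension merge that produced the
# "(sd-merge) ... Merged extensions into '/usr'" lines above.
# Assumes the standard `systemd-sysext` verbs (status/refresh) and
# the usual *.raw image locations; neither is printed in this log.
import pathlib
import subprocess

def installed_extension_images() -> list[pathlib.Path]:
    # Extension images are *.raw files (or symlinks to them), e.g.
    # /etc/extensions/kubernetes.raw from the Ignition files stage.
    roots = [pathlib.Path("/etc/extensions"), pathlib.Path("/var/lib/extensions")]
    images: list[pathlib.Path] = []
    for root in roots:
        if root.is_dir():
            images.extend(sorted(root.glob("*.raw")))
    return images

def refresh_merge() -> None:
    # "refresh" re-merges whatever images are currently installed,
    # roughly what systemd-sysext.service did during this boot.
    subprocess.run(["systemd-sysext", "refresh"], check=True)

if __name__ == "__main__":
    for img in installed_extension_images():
        print(img)
    subprocess.run(["systemd-sysext", "status"], check=False)
```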
Oct 9 01:05:08.381573 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 01:05:08.394329 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 01:05:08.409427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:05:08.415213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:05:08.427300 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 01:05:08.440104 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:08.440631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:05:08.452585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:05:08.474472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:05:08.491090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:05:08.493168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:05:08.493376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:08.496412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:05:08.497082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:05:08.517263 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 01:05:08.533517 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:08.535333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:05:08.546187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:05:08.548520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:05:08.549202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:08.580866 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 01:05:08.584817 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 01:05:08.604737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:08.605188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:05:08.607301 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:05:08.609061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:05:08.609275 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 01:05:08.628129 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Oct 9 01:05:08.629356 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:05:08.630097 systemd[1]: Finished ensure-sysext.service. Oct 9 01:05:08.632353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:05:08.632565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:05:08.650391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:05:08.661082 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:05:08.661370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:05:08.665987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:05:08.668855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:05:08.671392 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:05:08.671607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:05:08.673679 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 01:05:08.680702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:05:08.681684 systemd-udevd[1771]: Using default interface naming scheme 'v255'. Oct 9 01:05:08.696215 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 01:05:08.698279 augenrules[1804]: No rules Oct 9 01:05:08.700330 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 01:05:08.702754 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:05:08.703189 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:05:08.732033 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 01:05:08.756524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:05:08.768136 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:05:08.879505 (udev-worker)[1827]: Network interface NamePolicy= disabled on kernel command line. Oct 9 01:05:08.883803 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1832) Oct 9 01:05:08.890806 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1832) Oct 9 01:05:08.998272 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 9 01:05:09.057621 systemd-networkd[1817]: lo: Link UP Oct 9 01:05:09.057634 systemd-networkd[1817]: lo: Gained carrier Oct 9 01:05:09.059295 systemd-networkd[1817]: Enumeration completed Oct 9 01:05:09.060970 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:05:09.065259 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:05:09.065269 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 9 01:05:09.068818 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 01:05:09.070176 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 01:05:09.073066 systemd-resolved[1767]: Positive Trust Anchors: Oct 9 01:05:09.073087 systemd-resolved[1767]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:05:09.073234 systemd-resolved[1767]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:05:09.075843 kernel: ACPI: button: Power Button [PWRF] Oct 9 01:05:09.075383 systemd-networkd[1817]: eth0: Link UP Oct 9 01:05:09.075654 systemd-networkd[1817]: eth0: Gained carrier Oct 9 01:05:09.075682 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:05:09.082299 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Oct 9 01:05:09.084574 systemd-resolved[1767]: Defaulting to hostname 'linux'. Oct 9 01:05:09.086358 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.16.164/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 9 01:05:09.087459 kernel: ACPI: button: Sleep Button [SLPF] Oct 9 01:05:09.090848 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:05:09.093337 systemd[1]: Reached target network.target - Network. Oct 9 01:05:09.094473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:05:09.099809 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1820) Oct 9 01:05:09.111809 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Oct 9 01:05:09.171509 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:05:09.250847 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Oct 9 01:05:09.338845 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 01:05:09.361953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:05:09.378614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 9 01:05:09.390158 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 01:05:09.394713 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 01:05:09.405931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 01:05:09.431854 lvm[1932]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:05:09.439002 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 01:05:09.466355 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
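[Editor's note] systemd-networkd above brings eth0 up with a DHCPv4 lease (172.31.16.164/20, gateway 172.31.16.1) from the fallback zz-default.network policy. A small sketch for confirming such a lease from userspace; it assumes iproute2's JSON output flag, which is not part of this log.

```python
# Sketch: list the addresses on eth0, e.g. to confirm the DHCPv4
# lease logged above (172.31.16.164/20 via 172.31.16.1).
# Assumes iproute2 with JSON output support (`ip -j`).
import json
import subprocess

out = subprocess.run(
    ["ip", "-j", "addr", "show", "eth0"],
    capture_output=True, text=True, check=True,
).stdout

for iface in json.loads(out):
    for addr in iface.get("addr_info", []):
        # Prints lines like: inet 172.31.16.164/20
        print(addr["family"], f'{addr["local"]}/{addr["prefixlen"]}')
```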
Oct 9 01:05:09.467603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:05:09.475049 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 01:05:09.496553 lvm[1936]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:05:09.528145 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 01:05:09.625747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:05:09.628570 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:05:09.630721 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 01:05:09.632522 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 01:05:09.634238 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 01:05:09.635684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 01:05:09.637368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 01:05:09.640474 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 01:05:09.640514 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:05:09.642724 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:05:09.647218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 01:05:09.651878 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 01:05:09.666836 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 01:05:09.671461 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 01:05:09.674001 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:05:09.675359 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:05:09.677033 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:05:09.677064 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:05:09.683022 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 01:05:09.693174 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 01:05:09.697913 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 01:05:09.701914 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 01:05:09.715113 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 01:05:09.716540 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 01:05:09.726892 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 01:05:09.739770 systemd[1]: Started ntpd.service - Network Time Service. Oct 9 01:05:09.751087 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 01:05:09.777390 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 9 01:05:09.792052 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 01:05:09.797985 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 9 01:05:09.815300 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 01:05:09.817942 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 01:05:09.819922 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 01:05:09.826056 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 01:05:09.832502 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 01:05:09.846136 jq[1946]: false Oct 9 01:05:09.844434 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 01:05:09.844691 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 01:05:09.846589 jq[1960]: true Oct 9 01:05:09.910774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 01:05:09.912312 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 01:05:09.939811 update_engine[1958]: I20241009 01:05:09.938511 1958 main.cc:92] Flatcar Update Engine starting Oct 9 01:05:09.940266 extend-filesystems[1947]: Found loop4 Oct 9 01:05:09.940266 extend-filesystems[1947]: Found loop5 Oct 9 01:05:09.940266 extend-filesystems[1947]: Found loop6 Oct 9 01:05:09.940266 extend-filesystems[1947]: Found loop7 Oct 9 01:05:09.940266 extend-filesystems[1947]: Found nvme0n1 Oct 9 01:05:09.940266 extend-filesystems[1947]: Found nvme0n1p1 Oct 9 01:05:09.940266 extend-filesystems[1947]: Found nvme0n1p2 Oct 9 01:05:09.940266 extend-filesystems[1947]: Found nvme0n1p3 Oct 9 01:05:09.977025 extend-filesystems[1947]: Found usr Oct 9 01:05:09.977025 extend-filesystems[1947]: Found nvme0n1p4 Oct 9 01:05:09.977025 extend-filesystems[1947]: Found nvme0n1p6 Oct 9 01:05:09.977025 extend-filesystems[1947]: Found nvme0n1p7 Oct 9 01:05:09.977025 extend-filesystems[1947]: Found nvme0n1p9 Oct 9 01:05:09.977025 extend-filesystems[1947]: Checking size of /dev/nvme0n1p9 Oct 9 01:05:09.984680 jq[1965]: true Oct 9 01:05:09.952397 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 01:05:10.008448 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 01:05:10.010529 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 01:05:10.034577 (ntainerd)[1978]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 23:08:10 UTC 2024 (1): Starting Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: ---------------------------------------------------- Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: ntp-4 is maintained by Network Time Foundation, Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: corporation. 
Support and training for ntp-4 are Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: available at https://www.nwtime.org/support Oct 9 01:05:10.038986 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: ---------------------------------------------------- Oct 9 01:05:10.037409 ntpd[1949]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 23:08:10 UTC 2024 (1): Starting Oct 9 01:05:10.037501 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 9 01:05:10.037515 ntpd[1949]: ---------------------------------------------------- Oct 9 01:05:10.037589 ntpd[1949]: ntp-4 is maintained by Network Time Foundation, Oct 9 01:05:10.037605 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 9 01:05:10.037615 ntpd[1949]: corporation. Support and training for ntp-4 are Oct 9 01:05:10.037625 ntpd[1949]: available at https://www.nwtime.org/support Oct 9 01:05:10.037636 ntpd[1949]: ---------------------------------------------------- Oct 9 01:05:10.042181 dbus-daemon[1945]: [system] SELinux support is enabled Oct 9 01:05:10.044606 ntpd[1949]: proto: precision = 0.063 usec (-24) Oct 9 01:05:10.061968 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: proto: precision = 0.063 usec (-24) Oct 9 01:05:10.061968 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: basedate set to 2024-09-26 Oct 9 01:05:10.061968 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: gps base set to 2024-09-29 (week 2334) Oct 9 01:05:10.053645 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 01:05:10.044957 ntpd[1949]: basedate set to 2024-09-26 Oct 9 01:05:10.044970 ntpd[1949]: gps base set to 2024-09-29 (week 2334) Oct 9 01:05:10.064010 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 01:05:10.070125 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123 Oct 9 01:05:10.070125 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 9 01:05:10.068075 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123 Oct 9 01:05:10.064059 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 01:05:10.068133 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 9 01:05:10.068439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 01:05:10.068469 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 01:05:10.071958 dbus-daemon[1945]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 9 01:05:10.073543 update_engine[1958]: I20241009 01:05:10.073082 1958 update_check_scheduler.cc:74] Next update check in 3m38s Oct 9 01:05:10.075271 systemd[1]: Started update-engine.service - Update Engine. 
Oct 9 01:05:10.081004 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123 Oct 9 01:05:10.081004 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Listen normally on 3 eth0 172.31.16.164:123 Oct 9 01:05:10.081004 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Listen normally on 4 lo [::1]:123 Oct 9 01:05:10.081004 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: bind(21) AF_INET6 fe80::421:e2ff:fec4:a4cd%2#123 flags 0x11 failed: Cannot assign requested address Oct 9 01:05:10.081004 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: unable to create socket on eth0 (5) for fe80::421:e2ff:fec4:a4cd%2#123 Oct 9 01:05:10.081004 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: failed to init interface for address fe80::421:e2ff:fec4:a4cd%2 Oct 9 01:05:10.081004 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: Listening on routing socket on fd #21 for interface updates Oct 9 01:05:10.074323 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123 Oct 9 01:05:10.077322 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 9 01:05:10.074371 ntpd[1949]: Listen normally on 3 eth0 172.31.16.164:123 Oct 9 01:05:10.074412 ntpd[1949]: Listen normally on 4 lo [::1]:123 Oct 9 01:05:10.074461 ntpd[1949]: bind(21) AF_INET6 fe80::421:e2ff:fec4:a4cd%2#123 flags 0x11 failed: Cannot assign requested address Oct 9 01:05:10.074480 ntpd[1949]: unable to create socket on eth0 (5) for fe80::421:e2ff:fec4:a4cd%2#123 Oct 9 01:05:10.074496 ntpd[1949]: failed to init interface for address fe80::421:e2ff:fec4:a4cd%2 Oct 9 01:05:10.074533 ntpd[1949]: Listening on routing socket on fd #21 for interface updates Oct 9 01:05:10.097992 extend-filesystems[1947]: Resized partition /dev/nvme0n1p9 Oct 9 01:05:10.096677 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 9 01:05:10.091262 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 01:05:10.102933 extend-filesystems[2000]: resize2fs 1.47.1 (20-May-2024) Oct 9 01:05:10.117093 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 9 01:05:10.116991 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Oct 9 01:05:10.118295 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 01:05:10.118295 ntpd[1949]: 9 Oct 01:05:10 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 01:05:10.113297 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 01:05:10.115986 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 01:05:10.134524 tar[1974]: linux-amd64/helm Oct 9 01:05:10.139308 coreos-metadata[1944]: Oct 09 01:05:10.139 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 9 01:05:10.145957 coreos-metadata[1944]: Oct 09 01:05:10.143 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Oct 9 01:05:10.146747 coreos-metadata[1944]: Oct 09 01:05:10.146 INFO Fetch successful Oct 9 01:05:10.150858 coreos-metadata[1944]: Oct 09 01:05:10.149 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.158 INFO Fetch successful Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.158 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.160 INFO Fetch successful Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.160 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.161 INFO Fetch successful Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.161 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.163 INFO Fetch failed with 404: resource not found Oct 9 01:05:10.165809 coreos-metadata[1944]: Oct 09 01:05:10.163 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Oct 9 01:05:10.167812 coreos-metadata[1944]: Oct 09 01:05:10.167 INFO Fetch successful Oct 9 01:05:10.167911 coreos-metadata[1944]: Oct 09 01:05:10.167 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Oct 9 01:05:10.170991 coreos-metadata[1944]: Oct 09 01:05:10.170 INFO Fetch successful Oct 9 01:05:10.171097 coreos-metadata[1944]: Oct 09 01:05:10.171 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Oct 9 01:05:10.175689 coreos-metadata[1944]: Oct 09 01:05:10.175 INFO Fetch successful Oct 9 01:05:10.175862 coreos-metadata[1944]: Oct 09 01:05:10.175 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Oct 9 01:05:10.179440 coreos-metadata[1944]: Oct 09 01:05:10.179 INFO Fetch successful Oct 9 01:05:10.179554 coreos-metadata[1944]: Oct 09 01:05:10.179 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Oct 9 01:05:10.183301 coreos-metadata[1944]: Oct 09 01:05:10.182 INFO Fetch successful Oct 9 01:05:10.201818 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 9 01:05:10.313279 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 9 01:05:10.313279 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 01:05:10.313279 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 9 01:05:10.312334 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Oct 9 01:05:10.327055 extend-filesystems[1947]: Resized filesystem in /dev/nvme0n1p9 Oct 9 01:05:10.312837 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 01:05:10.339410 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 01:05:10.341117 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 01:05:10.351875 bash[2023]: Updated "/home/core/.ssh/authorized_keys" Oct 9 01:05:10.356524 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 01:05:10.386340 systemd-logind[1957]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 01:05:10.389884 systemd-logind[1957]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 9 01:05:10.389918 systemd-logind[1957]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 01:05:10.397428 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1816) Oct 9 01:05:10.397709 systemd-logind[1957]: New seat seat0. Oct 9 01:05:10.401051 systemd[1]: Starting sshkeys.service... Oct 9 01:05:10.411270 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 01:05:10.417909 systemd-networkd[1817]: eth0: Gained IPv6LL Oct 9 01:05:10.440299 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 01:05:10.443225 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 01:05:10.462345 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Oct 9 01:05:10.476196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:05:10.490856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 01:05:10.627596 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 01:05:10.644482 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 9 01:05:10.818104 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 01:05:10.948490 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 9 01:05:10.948735 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 9 01:05:10.965385 dbus-daemon[1945]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2001 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 9 01:05:10.968026 amazon-ssm-agent[2033]: Initializing new seelog logger Oct 9 01:05:10.981330 amazon-ssm-agent[2033]: New Seelog Logger Creation Complete Oct 9 01:05:10.981470 amazon-ssm-agent[2033]: 2024/10/09 01:05:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 01:05:10.981470 amazon-ssm-agent[2033]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 01:05:10.981966 amazon-ssm-agent[2033]: 2024/10/09 01:05:10 processing appconfig overrides Oct 9 01:05:11.001758 systemd[1]: Starting polkit.service - Authorization Manager... Oct 9 01:05:11.017979 amazon-ssm-agent[2033]: 2024/10/09 01:05:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 01:05:11.017979 amazon-ssm-agent[2033]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 9 01:05:11.017979 amazon-ssm-agent[2033]: 2024/10/09 01:05:11 processing appconfig overrides Oct 9 01:05:11.017979 amazon-ssm-agent[2033]: 2024/10/09 01:05:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 01:05:11.017979 amazon-ssm-agent[2033]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 01:05:11.017979 amazon-ssm-agent[2033]: 2024/10/09 01:05:11 processing appconfig overrides Oct 9 01:05:11.029762 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO Proxy environment variables: Oct 9 01:05:11.039807 amazon-ssm-agent[2033]: 2024/10/09 01:05:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 01:05:11.041254 amazon-ssm-agent[2033]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 01:05:11.041254 amazon-ssm-agent[2033]: 2024/10/09 01:05:11 processing appconfig overrides Oct 9 01:05:11.094997 coreos-metadata[2038]: Oct 09 01:05:11.094 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 9 01:05:11.096956 coreos-metadata[2038]: Oct 09 01:05:11.096 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Oct 9 01:05:11.104555 coreos-metadata[2038]: Oct 09 01:05:11.101 INFO Fetch successful Oct 9 01:05:11.104555 coreos-metadata[2038]: Oct 09 01:05:11.101 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 9 01:05:11.105235 coreos-metadata[2038]: Oct 09 01:05:11.105 INFO Fetch successful Oct 9 01:05:11.128311 unknown[2038]: wrote ssh authorized keys file for user: core Oct 9 01:05:11.152833 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO https_proxy: Oct 9 01:05:11.165926 polkitd[2097]: Started polkitd version 121 Oct 9 01:05:11.206605 update-ssh-keys[2129]: Updated "/home/core/.ssh/authorized_keys" Oct 9 01:05:11.211659 polkitd[2097]: Loading rules from directory /etc/polkit-1/rules.d Oct 9 01:05:11.208078 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 01:05:11.222213 systemd[1]: Finished sshkeys.service. Oct 9 01:05:11.227044 polkitd[2097]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 9 01:05:11.231952 polkitd[2097]: Finished loading, compiling and executing 2 rules Oct 9 01:05:11.234078 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 9 01:05:11.234313 systemd[1]: Started polkit.service - Authorization Manager. Oct 9 01:05:11.237071 locksmithd[1997]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 01:05:11.241968 polkitd[2097]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 9 01:05:11.263872 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO http_proxy: Oct 9 01:05:11.359628 systemd-hostnamed[2001]: Hostname set to (transient) Oct 9 01:05:11.359773 systemd-resolved[1767]: System hostname changed to 'ip-172-31-16-164'. Oct 9 01:05:11.369277 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO no_proxy: Oct 9 01:05:11.417818 containerd[1978]: time="2024-10-09T01:05:11.408300336Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 01:05:11.473152 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO Checking if agent identity type OnPrem can be assumed Oct 9 01:05:11.566890 containerd[1978]: time="2024-10-09T01:05:11.566834790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.572518810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.572575166Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.572602187Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.572925389Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.572959191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.573048093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.573067148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.573310745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.573336623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.573361486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:05:11.573949 containerd[1978]: time="2024-10-09T01:05:11.573379558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 01:05:11.574766 containerd[1978]: time="2024-10-09T01:05:11.573496184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:05:11.577432 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO Checking if agent identity type EC2 can be assumed Oct 9 01:05:11.577540 containerd[1978]: time="2024-10-09T01:05:11.576057366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:05:11.577540 containerd[1978]: time="2024-10-09T01:05:11.576352707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:05:11.577540 containerd[1978]: time="2024-10-09T01:05:11.576378871Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Oct 9 01:05:11.577540 containerd[1978]: time="2024-10-09T01:05:11.576564160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 01:05:11.577540 containerd[1978]: time="2024-10-09T01:05:11.576657323Z" level=info msg="metadata content store policy set" policy=shared Oct 9 01:05:11.586397 containerd[1978]: time="2024-10-09T01:05:11.586316662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 01:05:11.587031 containerd[1978]: time="2024-10-09T01:05:11.586559966Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 01:05:11.587031 containerd[1978]: time="2024-10-09T01:05:11.586589605Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 01:05:11.587031 containerd[1978]: time="2024-10-09T01:05:11.586653575Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 01:05:11.587031 containerd[1978]: time="2024-10-09T01:05:11.586701832Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 01:05:11.587031 containerd[1978]: time="2024-10-09T01:05:11.586973484Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.588730134Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.588927079Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.588954365Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.588977735Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.588998864Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589018766Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589038038Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589060605Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589082386Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589102737Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589126900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589154198Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589185730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.589955 containerd[1978]: time="2024-10-09T01:05:11.589207559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589227543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589248384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589267331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589286753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589305577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589325211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589345473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589366536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589384561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589404365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589423067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589445389Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589477732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589497355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.590518 containerd[1978]: time="2024-10-09T01:05:11.589514451Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592301944Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592347622Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592369737Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592389530Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592405326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592433727Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592451106Z" level=info msg="NRI interface is disabled by configuration." Oct 9 01:05:11.594654 containerd[1978]: time="2024-10-09T01:05:11.592467596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 01:05:11.595098 containerd[1978]: time="2024-10-09T01:05:11.592924255Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 01:05:11.595098 containerd[1978]: time="2024-10-09T01:05:11.592990972Z" level=info msg="Connect containerd service" Oct 9 01:05:11.595098 containerd[1978]: time="2024-10-09T01:05:11.593033740Z" level=info msg="using legacy CRI server" Oct 9 01:05:11.595098 containerd[1978]: time="2024-10-09T01:05:11.593044036Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 01:05:11.595098 containerd[1978]: time="2024-10-09T01:05:11.593187341Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 01:05:11.598500 containerd[1978]: time="2024-10-09T01:05:11.596978842Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:05:11.598500 containerd[1978]: time="2024-10-09T01:05:11.597447666Z" level=info msg="Start subscribing containerd event" Oct 9 01:05:11.598500 containerd[1978]: time="2024-10-09T01:05:11.597510394Z" level=info msg="Start recovering state" Oct 9 01:05:11.598500 containerd[1978]: time="2024-10-09T01:05:11.597726129Z" level=info msg="Start event monitor" Oct 9 01:05:11.598500 containerd[1978]: time="2024-10-09T01:05:11.597752631Z" level=info msg="Start snapshots syncer" Oct 9 01:05:11.598500 containerd[1978]: time="2024-10-09T01:05:11.597766975Z" level=info msg="Start cni network conf syncer for default" Oct 9 01:05:11.598500 containerd[1978]: time="2024-10-09T01:05:11.597790878Z" level=info msg="Start streaming server" Oct 9 01:05:11.600806 containerd[1978]: time="2024-10-09T01:05:11.599952886Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 01:05:11.600806 containerd[1978]: time="2024-10-09T01:05:11.600013699Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 01:05:11.601493 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 01:05:11.601813 containerd[1978]: time="2024-10-09T01:05:11.601767420Z" level=info msg="containerd successfully booted in 0.202238s" Oct 9 01:05:11.678629 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO Agent will take identity from EC2 Oct 9 01:05:11.773910 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 9 01:05:11.873412 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 9 01:05:11.974831 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 9 01:05:12.079861 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Oct 9 01:05:12.180104 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Oct 9 01:05:12.281940 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [amazon-ssm-agent] Starting Core Agent Oct 9 01:05:12.382374 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Oct 9 01:05:12.424307 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [Registrar] Starting registrar module Oct 9 01:05:12.424472 amazon-ssm-agent[2033]: 2024-10-09 01:05:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Oct 9 01:05:12.424551 amazon-ssm-agent[2033]: 2024-10-09 01:05:12 INFO [EC2Identity] EC2 registration was successful. Oct 9 01:05:12.424617 amazon-ssm-agent[2033]: 2024-10-09 01:05:12 INFO [CredentialRefresher] credentialRefresher has started Oct 9 01:05:12.424698 amazon-ssm-agent[2033]: 2024-10-09 01:05:12 INFO [CredentialRefresher] Starting credentials refresher loop Oct 9 01:05:12.424830 amazon-ssm-agent[2033]: 2024-10-09 01:05:12 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 9 01:05:12.485499 amazon-ssm-agent[2033]: 2024-10-09 01:05:12 INFO [CredentialRefresher] Next credential rotation will be in 31.308304128583334 minutes Oct 9 01:05:12.577372 tar[1974]: linux-amd64/LICENSE Oct 9 01:05:12.577894 tar[1974]: linux-amd64/README.md Oct 9 01:05:12.604507 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 01:05:12.640297 sshd_keygen[2003]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 01:05:12.679765 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 01:05:12.690338 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 01:05:12.697597 systemd[1]: Started sshd@0-172.31.16.164:22-147.75.109.163:44392.service - OpenSSH per-connection server daemon (147.75.109.163:44392). Oct 9 01:05:12.704139 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 01:05:12.704642 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 01:05:12.728023 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 01:05:12.763614 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 01:05:12.774341 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 01:05:12.783941 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 01:05:12.785826 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 01:05:12.963809 sshd[2181]: Accepted publickey for core from 147.75.109.163 port 44392 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:05:12.967257 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:12.989466 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 01:05:13.000425 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 01:05:13.007829 systemd-logind[1957]: New session 1 of user core. Oct 9 01:05:13.038476 ntpd[1949]: Listen normally on 6 eth0 [fe80::421:e2ff:fec4:a4cd%2]:123 Oct 9 01:05:13.040016 ntpd[1949]: 9 Oct 01:05:13 ntpd[1949]: Listen normally on 6 eth0 [fe80::421:e2ff:fec4:a4cd%2]:123 Oct 9 01:05:13.051510 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 01:05:13.070268 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 01:05:13.075989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:05:13.083839 systemd[1]: Reached target multi-user.target - Multi-User System. 
Oct 9 01:05:13.087288 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:05:13.091749 (systemd)[2196]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 01:05:13.334693 systemd[2196]: Queued start job for default target default.target. Oct 9 01:05:13.342305 systemd[2196]: Created slice app.slice - User Application Slice. Oct 9 01:05:13.342348 systemd[2196]: Reached target paths.target - Paths. Oct 9 01:05:13.342370 systemd[2196]: Reached target timers.target - Timers. Oct 9 01:05:13.347017 systemd[2196]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 01:05:13.374728 systemd[2196]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 01:05:13.375870 systemd[2196]: Reached target sockets.target - Sockets. Oct 9 01:05:13.376017 systemd[2196]: Reached target basic.target - Basic System. Oct 9 01:05:13.376082 systemd[2196]: Reached target default.target - Main User Target. Oct 9 01:05:13.376121 systemd[2196]: Startup finished in 267ms. Oct 9 01:05:13.376464 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 01:05:13.390003 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 01:05:13.392405 systemd[1]: Startup finished in 810ms (kernel) + 8.905s (initrd) + 8.757s (userspace) = 18.473s. Oct 9 01:05:13.477459 amazon-ssm-agent[2033]: 2024-10-09 01:05:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 9 01:05:13.579892 amazon-ssm-agent[2033]: 2024-10-09 01:05:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2215) started Oct 9 01:05:13.641255 systemd[1]: Started sshd@1-172.31.16.164:22-147.75.109.163:47830.service - OpenSSH per-connection server daemon (147.75.109.163:47830). Oct 9 01:05:13.680908 amazon-ssm-agent[2033]: 2024-10-09 01:05:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 9 01:05:13.859454 sshd[2224]: Accepted publickey for core from 147.75.109.163 port 47830 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:05:13.861543 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:13.869068 systemd-logind[1957]: New session 2 of user core. Oct 9 01:05:13.875007 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 01:05:14.003460 sshd[2224]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:14.008118 systemd-logind[1957]: Session 2 logged out. Waiting for processes to exit. Oct 9 01:05:14.009434 systemd[1]: sshd@1-172.31.16.164:22-147.75.109.163:47830.service: Deactivated successfully. Oct 9 01:05:14.013525 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 01:05:14.017632 systemd-logind[1957]: Removed session 2. Oct 9 01:05:14.043041 systemd[1]: Started sshd@2-172.31.16.164:22-147.75.109.163:47840.service - OpenSSH per-connection server daemon (147.75.109.163:47840). 
Oct 9 01:05:14.138910 kubelet[2197]: E1009 01:05:14.138868 2197 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:05:14.141884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:05:14.142082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:05:14.142430 systemd[1]: kubelet.service: Consumed 1.156s CPU time. Oct 9 01:05:14.201897 sshd[2239]: Accepted publickey for core from 147.75.109.163 port 47840 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:05:14.203329 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:14.209928 systemd-logind[1957]: New session 3 of user core. Oct 9 01:05:14.216007 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 01:05:14.330283 sshd[2239]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:14.335521 systemd[1]: sshd@2-172.31.16.164:22-147.75.109.163:47840.service: Deactivated successfully. Oct 9 01:05:14.338518 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 01:05:14.339599 systemd-logind[1957]: Session 3 logged out. Waiting for processes to exit. Oct 9 01:05:14.340756 systemd-logind[1957]: Removed session 3. Oct 9 01:05:14.370452 systemd[1]: Started sshd@3-172.31.16.164:22-147.75.109.163:47856.service - OpenSSH per-connection server daemon (147.75.109.163:47856). Oct 9 01:05:14.528510 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 47856 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:05:14.530418 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:14.544892 systemd-logind[1957]: New session 4 of user core. Oct 9 01:05:14.551147 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 01:05:14.685407 sshd[2248]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:14.690485 systemd[1]: sshd@3-172.31.16.164:22-147.75.109.163:47856.service: Deactivated successfully. Oct 9 01:05:14.695436 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 01:05:14.699914 systemd-logind[1957]: Session 4 logged out. Waiting for processes to exit. Oct 9 01:05:14.701505 systemd-logind[1957]: Removed session 4. Oct 9 01:05:14.733381 systemd[1]: Started sshd@4-172.31.16.164:22-147.75.109.163:47860.service - OpenSSH per-connection server daemon (147.75.109.163:47860). Oct 9 01:05:14.896811 sshd[2255]: Accepted publickey for core from 147.75.109.163 port 47860 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:05:14.898409 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:14.903037 systemd-logind[1957]: New session 5 of user core. Oct 9 01:05:14.914032 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 9 01:05:15.031660 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:05:15.033868 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:05:15.056505 sudo[2258]: pam_unix(sudo:session): session closed for user root Oct 9 01:05:15.080087 sshd[2255]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:15.084802 systemd[1]: sshd@4-172.31.16.164:22-147.75.109.163:47860.service: Deactivated successfully. Oct 9 01:05:15.087219 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 01:05:15.089428 systemd-logind[1957]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:05:15.091337 systemd-logind[1957]: Removed session 5. Oct 9 01:05:15.118188 systemd[1]: Started sshd@5-172.31.16.164:22-147.75.109.163:47874.service - OpenSSH per-connection server daemon (147.75.109.163:47874). Oct 9 01:05:15.281050 sshd[2263]: Accepted publickey for core from 147.75.109.163 port 47874 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:05:15.283910 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:15.290970 systemd-logind[1957]: New session 6 of user core. Oct 9 01:05:15.301026 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 01:05:15.400514 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:05:15.401597 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:05:15.407702 sudo[2267]: pam_unix(sudo:session): session closed for user root Oct 9 01:05:15.420963 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:05:15.421520 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:05:15.481621 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:05:15.564311 augenrules[2289]: No rules Oct 9 01:05:15.566668 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:05:15.566932 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:05:15.570373 sudo[2266]: pam_unix(sudo:session): session closed for user root Oct 9 01:05:15.596564 sshd[2263]: pam_unix(sshd:session): session closed for user core Oct 9 01:05:15.602576 systemd[1]: sshd@5-172.31.16.164:22-147.75.109.163:47874.service: Deactivated successfully. Oct 9 01:05:15.606298 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:05:15.608111 systemd-logind[1957]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:05:15.609413 systemd-logind[1957]: Removed session 6. Oct 9 01:05:15.633523 systemd[1]: Started sshd@6-172.31.16.164:22-147.75.109.163:47888.service - OpenSSH per-connection server daemon (147.75.109.163:47888). Oct 9 01:05:15.808089 sshd[2297]: Accepted publickey for core from 147.75.109.163 port 47888 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:05:15.810213 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:05:15.819389 systemd-logind[1957]: New session 7 of user core. Oct 9 01:05:15.823115 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 9 01:05:15.926305 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:05:15.926736 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:05:16.534624 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:05:16.549034 (dockerd)[2318]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:05:17.029989 dockerd[2318]: time="2024-10-09T01:05:17.029643076Z" level=info msg="Starting up" Oct 9 01:05:17.560383 systemd-resolved[1767]: Clock change detected. Flushing caches. Oct 9 01:05:17.905876 dockerd[2318]: time="2024-10-09T01:05:17.905092407Z" level=info msg="Loading containers: start." Oct 9 01:05:18.166873 kernel: Initializing XFRM netlink socket Oct 9 01:05:18.206342 (udev-worker)[2342]: Network interface NamePolicy= disabled on kernel command line. Oct 9 01:05:18.291137 systemd-networkd[1817]: docker0: Link UP Oct 9 01:05:18.326805 dockerd[2318]: time="2024-10-09T01:05:18.326752586Z" level=info msg="Loading containers: done." Oct 9 01:05:18.376735 dockerd[2318]: time="2024-10-09T01:05:18.376673425Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:05:18.376964 dockerd[2318]: time="2024-10-09T01:05:18.376797884Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:05:18.377028 dockerd[2318]: time="2024-10-09T01:05:18.376967060Z" level=info msg="Daemon has completed initialization" Oct 9 01:05:18.426489 dockerd[2318]: time="2024-10-09T01:05:18.426414846Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:05:18.428985 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 01:05:19.517409 containerd[1978]: time="2024-10-09T01:05:19.517151006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 9 01:05:20.198311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount501289975.mount: Deactivated successfully. 
Oct 9 01:05:23.550977 containerd[1978]: time="2024-10-09T01:05:23.550924212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:23.552391 containerd[1978]: time="2024-10-09T01:05:23.552205901Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097" Oct 9 01:05:23.554625 containerd[1978]: time="2024-10-09T01:05:23.554057006Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:23.557729 containerd[1978]: time="2024-10-09T01:05:23.557689162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:23.559447 containerd[1978]: time="2024-10-09T01:05:23.558870887Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 4.041681396s" Oct 9 01:05:23.559447 containerd[1978]: time="2024-10-09T01:05:23.558916744Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 9 01:05:23.586311 containerd[1978]: time="2024-10-09T01:05:23.586266363Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 9 01:05:24.914686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 01:05:24.921098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:05:25.387159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:05:25.398921 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:05:25.467458 kubelet[2581]: E1009 01:05:25.467403 2581 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:05:25.472937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:05:25.473298 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 01:05:27.944216 containerd[1978]: time="2024-10-09T01:05:27.944162982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:27.965472 containerd[1978]: time="2024-10-09T01:05:27.965190272Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652" Oct 9 01:05:27.990411 containerd[1978]: time="2024-10-09T01:05:27.988858749Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:28.019157 containerd[1978]: time="2024-10-09T01:05:28.019102507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:28.021070 containerd[1978]: time="2024-10-09T01:05:28.021022882Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 4.434715328s" Oct 9 01:05:28.021070 containerd[1978]: time="2024-10-09T01:05:28.021073372Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 9 01:05:28.057461 containerd[1978]: time="2024-10-09T01:05:28.057422368Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 9 01:05:31.298180 containerd[1978]: time="2024-10-09T01:05:31.298124481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:31.300065 containerd[1978]: time="2024-10-09T01:05:31.299848101Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987" Oct 9 01:05:31.302608 containerd[1978]: time="2024-10-09T01:05:31.301723941Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:31.308463 containerd[1978]: time="2024-10-09T01:05:31.308277996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:31.309786 containerd[1978]: time="2024-10-09T01:05:31.309610704Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 3.252137095s" Oct 9 01:05:31.309786 containerd[1978]: time="2024-10-09T01:05:31.309654897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 9 01:05:31.370600 containerd[1978]: 
time="2024-10-09T01:05:31.370557581Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 9 01:05:32.888719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1972154577.mount: Deactivated successfully. Oct 9 01:05:33.804618 containerd[1978]: time="2024-10-09T01:05:33.804561672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:33.806018 containerd[1978]: time="2024-10-09T01:05:33.805850582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362" Oct 9 01:05:33.807849 containerd[1978]: time="2024-10-09T01:05:33.807489043Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:33.812067 containerd[1978]: time="2024-10-09T01:05:33.811019630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:33.812067 containerd[1978]: time="2024-10-09T01:05:33.811767198Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 2.441163414s" Oct 9 01:05:33.812067 containerd[1978]: time="2024-10-09T01:05:33.811927567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 9 01:05:33.843322 containerd[1978]: time="2024-10-09T01:05:33.843285385Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 01:05:34.505261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023645884.mount: Deactivated successfully. Oct 9 01:05:35.582032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 01:05:35.591348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:05:36.342125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:05:36.356463 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:05:36.474610 kubelet[2671]: E1009 01:05:36.474573 2671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:05:36.478025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:05:36.478222 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 01:05:36.485249 containerd[1978]: time="2024-10-09T01:05:36.485190927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:36.492232 containerd[1978]: time="2024-10-09T01:05:36.492154596Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 01:05:36.499271 containerd[1978]: time="2024-10-09T01:05:36.498471083Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:36.510872 containerd[1978]: time="2024-10-09T01:05:36.509014812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:36.512462 containerd[1978]: time="2024-10-09T01:05:36.512374013Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.669041279s" Oct 9 01:05:36.512462 containerd[1978]: time="2024-10-09T01:05:36.512424330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 01:05:36.551114 containerd[1978]: time="2024-10-09T01:05:36.551045104Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 01:05:37.083429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683006298.mount: Deactivated successfully. 
Oct 9 01:05:37.097257 containerd[1978]: time="2024-10-09T01:05:37.097197057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:37.098580 containerd[1978]: time="2024-10-09T01:05:37.098408159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 01:05:37.100675 containerd[1978]: time="2024-10-09T01:05:37.100633431Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:37.103535 containerd[1978]: time="2024-10-09T01:05:37.103369473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:37.104892 containerd[1978]: time="2024-10-09T01:05:37.104857016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 553.775047ms" Oct 9 01:05:37.104892 containerd[1978]: time="2024-10-09T01:05:37.104892762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 01:05:37.137966 containerd[1978]: time="2024-10-09T01:05:37.137650175Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 9 01:05:37.689776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650990530.mount: Deactivated successfully. Oct 9 01:05:41.483427 containerd[1978]: time="2024-10-09T01:05:41.483299374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:41.489107 containerd[1978]: time="2024-10-09T01:05:41.489004765Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Oct 9 01:05:41.504575 containerd[1978]: time="2024-10-09T01:05:41.504504990Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:41.510713 containerd[1978]: time="2024-10-09T01:05:41.510634806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:41.512474 containerd[1978]: time="2024-10-09T01:05:41.512291482Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.374594678s" Oct 9 01:05:41.512474 containerd[1978]: time="2024-10-09T01:05:41.512338765Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 9 01:05:41.903104 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Oct 9 01:05:45.072574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:05:45.081278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:05:45.113498 systemd[1]: Reloading requested from client PID 2808 ('systemctl') (unit session-7.scope)... Oct 9 01:05:45.113522 systemd[1]: Reloading... Oct 9 01:05:45.235498 zram_generator::config[2844]: No configuration found. Oct 9 01:05:45.532611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:05:45.666315 systemd[1]: Reloading finished in 552 ms. Oct 9 01:05:45.772055 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 01:05:45.772167 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 01:05:45.772459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:05:45.786388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:05:46.202025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:05:46.221160 (kubelet)[2908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:05:46.328009 kubelet[2908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:05:46.328009 kubelet[2908]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:05:46.328009 kubelet[2908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:05:46.328455 kubelet[2908]: I1009 01:05:46.328070 2908 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:05:46.591975 kubelet[2908]: I1009 01:05:46.591932 2908 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:05:46.591975 kubelet[2908]: I1009 01:05:46.591964 2908 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:05:46.592276 kubelet[2908]: I1009 01:05:46.592253 2908 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:05:46.628291 kubelet[2908]: I1009 01:05:46.627835 2908 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:05:46.629423 kubelet[2908]: E1009 01:05:46.629247 2908 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.164:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.658481 kubelet[2908]: I1009 01:05:46.658038 2908 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:05:46.668837 kubelet[2908]: I1009 01:05:46.668748 2908 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:05:46.671580 kubelet[2908]: I1009 01:05:46.668836 2908 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-164","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:05:46.674939 kubelet[2908]: I1009 01:05:46.674488 2908 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:05:46.674939 kubelet[2908]: I1009 01:05:46.674527 2908 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:05:46.674939 kubelet[2908]: I1009 01:05:46.674881 2908 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:05:46.677450 kubelet[2908]: I1009 01:05:46.677067 2908 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:05:46.677450 kubelet[2908]: I1009 01:05:46.677099 2908 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:05:46.677450 kubelet[2908]: I1009 01:05:46.677132 2908 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:05:46.677450 kubelet[2908]: I1009 01:05:46.677157 2908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:05:46.687855 kubelet[2908]: W1009 01:05:46.687574 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.687855 kubelet[2908]: E1009 01:05:46.687645 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.687855 kubelet[2908]: W1009 01:05:46.687734 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.16.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-164&limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.687855 kubelet[2908]: E1009 01:05:46.687775 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-164&limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.691343 kubelet[2908]: I1009 01:05:46.688406 2908 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:05:46.693035 kubelet[2908]: I1009 01:05:46.691732 2908 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:05:46.693035 kubelet[2908]: W1009 01:05:46.691793 2908 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 01:05:46.693222 kubelet[2908]: I1009 01:05:46.693044 2908 server.go:1264] "Started kubelet" Oct 9 01:05:46.700343 kubelet[2908]: I1009 01:05:46.700265 2908 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:05:46.702295 kubelet[2908]: I1009 01:05:46.702233 2908 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:05:46.702763 kubelet[2908]: I1009 01:05:46.702730 2908 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:05:46.703180 kubelet[2908]: E1009 01:05:46.702933 2908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.164:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.164:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-164.17fca355fba72e99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-164,UID:ip-172-31-16-164,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-164,},FirstTimestamp:2024-10-09 01:05:46.693013145 +0000 UTC m=+0.465385281,LastTimestamp:2024-10-09 01:05:46.693013145 +0000 UTC m=+0.465385281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-164,}" Oct 9 01:05:46.704798 kubelet[2908]: I1009 01:05:46.704202 2908 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:05:46.709584 kubelet[2908]: I1009 01:05:46.708999 2908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:05:46.716026 kubelet[2908]: I1009 01:05:46.715981 2908 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:05:46.719130 kubelet[2908]: I1009 01:05:46.718811 2908 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:05:46.719130 kubelet[2908]: I1009 01:05:46.718930 2908 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:05:46.721961 kubelet[2908]: W1009 01:05:46.721879 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.722943 kubelet[2908]: E1009 01:05:46.722703 
2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.724643 kubelet[2908]: E1009 01:05:46.724619 2908 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:05:46.725283 kubelet[2908]: I1009 01:05:46.725099 2908 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:05:46.725283 kubelet[2908]: E1009 01:05:46.724985 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-164?timeout=10s\": dial tcp 172.31.16.164:6443: connect: connection refused" interval="200ms" Oct 9 01:05:46.731385 kubelet[2908]: I1009 01:05:46.731357 2908 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:05:46.731385 kubelet[2908]: I1009 01:05:46.731381 2908 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:05:46.771764 kubelet[2908]: I1009 01:05:46.771393 2908 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:05:46.771764 kubelet[2908]: I1009 01:05:46.771417 2908 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:05:46.771764 kubelet[2908]: I1009 01:05:46.771438 2908 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:05:46.775101 kubelet[2908]: I1009 01:05:46.775027 2908 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:05:46.788501 kubelet[2908]: I1009 01:05:46.776760 2908 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 01:05:46.788501 kubelet[2908]: I1009 01:05:46.776786 2908 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:05:46.788501 kubelet[2908]: I1009 01:05:46.776816 2908 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:05:46.788501 kubelet[2908]: E1009 01:05:46.776956 2908 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:05:46.788501 kubelet[2908]: W1009 01:05:46.778404 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.788501 kubelet[2908]: E1009 01:05:46.780675 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:46.794837 kubelet[2908]: I1009 01:05:46.794778 2908 policy_none.go:49] "None policy: Start" Oct 9 01:05:46.795716 kubelet[2908]: I1009 01:05:46.795658 2908 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:05:46.795849 kubelet[2908]: I1009 01:05:46.795742 2908 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:05:46.818751 kubelet[2908]: I1009 01:05:46.818713 2908 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-164" Oct 9 01:05:46.822478 kubelet[2908]: E1009 01:05:46.819139 2908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.164:6443/api/v1/nodes\": dial tcp 172.31.16.164:6443: connect: connection refused" node="ip-172-31-16-164" Oct 9 01:05:46.836482 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 01:05:46.848035 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 01:05:46.854810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 9 01:05:46.866014 kubelet[2908]: I1009 01:05:46.865986 2908 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:05:46.866787 kubelet[2908]: I1009 01:05:46.866473 2908 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:05:46.866787 kubelet[2908]: I1009 01:05:46.866703 2908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:05:46.869397 kubelet[2908]: E1009 01:05:46.869332 2908 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-164\" not found" Oct 9 01:05:46.882904 kubelet[2908]: I1009 01:05:46.882495 2908 topology_manager.go:215] "Topology Admit Handler" podUID="4f5208d137b104fa8e907099268a66c1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-164" Oct 9 01:05:46.891938 kubelet[2908]: I1009 01:05:46.891898 2908 topology_manager.go:215] "Topology Admit Handler" podUID="b20af47f273636c384e3110e68c18550" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:46.897240 kubelet[2908]: I1009 01:05:46.897179 2908 topology_manager.go:215] "Topology Admit Handler" podUID="ad81eabbd0f6a8ca780b0aeada2907ea" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-164" Oct 9 01:05:46.910911 systemd[1]: Created slice kubepods-burstable-pod4f5208d137b104fa8e907099268a66c1.slice - libcontainer container kubepods-burstable-pod4f5208d137b104fa8e907099268a66c1.slice. Oct 9 01:05:46.926281 kubelet[2908]: E1009 01:05:46.926241 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-164?timeout=10s\": dial tcp 172.31.16.164:6443: connect: connection refused" interval="400ms" Oct 9 01:05:46.930245 systemd[1]: Created slice kubepods-burstable-podb20af47f273636c384e3110e68c18550.slice - libcontainer container kubepods-burstable-podb20af47f273636c384e3110e68c18550.slice. Oct 9 01:05:46.945762 systemd[1]: Created slice kubepods-burstable-podad81eabbd0f6a8ca780b0aeada2907ea.slice - libcontainer container kubepods-burstable-podad81eabbd0f6a8ca780b0aeada2907ea.slice. 
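The "Created slice kubepods-..." entries show the pod cgroup layout the kubelet uses here (CgroupDriver "systemd", CgroupsPerQOS true): one slice per QoS class and one kubepods-<qos>-pod<uid>.slice per pod, with dashes in the pod UID escaped to underscores, as the besteffort slices later in this log (e.g. kubepods-besteffort-pod2d7d1d90_4a84_4ffb_a57a_845eef739e52.slice) make visible. A small illustrative helper reproducing that naming convention as it appears in this journal; it is not kubelet code.

```python
def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    """Build the systemd slice name for a pod cgroup, following the pattern
    seen in the 'Created slice kubepods-...' entries of this journal."""
    # systemd uses '-' to express slice nesting, so dashes in the UID are escaped.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# Matches the log:
#   pod_slice_name("b20af47f273636c384e3110e68c18550", "burstable")
#     -> kubepods-burstable-podb20af47f273636c384e3110e68c18550.slice
#   pod_slice_name("2d7d1d90-4a84-4ffb-a57a-845eef739e52", "besteffort")
#     -> kubepods-besteffort-pod2d7d1d90_4a84_4ffb_a57a_845eef739e52.slice
```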
Oct 9 01:05:47.020547 kubelet[2908]: I1009 01:05:47.020086 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad81eabbd0f6a8ca780b0aeada2907ea-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-164\" (UID: \"ad81eabbd0f6a8ca780b0aeada2907ea\") " pod="kube-system/kube-scheduler-ip-172-31-16-164" Oct 9 01:05:47.020547 kubelet[2908]: I1009 01:05:47.020134 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f5208d137b104fa8e907099268a66c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-164\" (UID: \"4f5208d137b104fa8e907099268a66c1\") " pod="kube-system/kube-apiserver-ip-172-31-16-164" Oct 9 01:05:47.020547 kubelet[2908]: I1009 01:05:47.020161 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:47.020547 kubelet[2908]: I1009 01:05:47.020184 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:47.020547 kubelet[2908]: I1009 01:05:47.020212 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:47.021016 kubelet[2908]: I1009 01:05:47.020237 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:47.021016 kubelet[2908]: I1009 01:05:47.020259 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:47.021016 kubelet[2908]: I1009 01:05:47.020374 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f5208d137b104fa8e907099268a66c1-ca-certs\") pod \"kube-apiserver-ip-172-31-16-164\" (UID: \"4f5208d137b104fa8e907099268a66c1\") " pod="kube-system/kube-apiserver-ip-172-31-16-164" Oct 9 01:05:47.021016 kubelet[2908]: I1009 01:05:47.020424 2908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f5208d137b104fa8e907099268a66c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-164\" (UID: \"4f5208d137b104fa8e907099268a66c1\") " pod="kube-system/kube-apiserver-ip-172-31-16-164" Oct 9 01:05:47.021913 kubelet[2908]: I1009 01:05:47.021879 2908 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-164" Oct 9 01:05:47.022263 kubelet[2908]: E1009 01:05:47.022227 2908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.164:6443/api/v1/nodes\": dial tcp 172.31.16.164:6443: connect: connection refused" node="ip-172-31-16-164" Oct 9 01:05:47.227257 containerd[1978]: time="2024-10-09T01:05:47.227129751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-164,Uid:4f5208d137b104fa8e907099268a66c1,Namespace:kube-system,Attempt:0,}" Oct 9 01:05:47.233959 containerd[1978]: time="2024-10-09T01:05:47.233885617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-164,Uid:b20af47f273636c384e3110e68c18550,Namespace:kube-system,Attempt:0,}" Oct 9 01:05:47.251587 containerd[1978]: time="2024-10-09T01:05:47.250899406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-164,Uid:ad81eabbd0f6a8ca780b0aeada2907ea,Namespace:kube-system,Attempt:0,}" Oct 9 01:05:47.327360 kubelet[2908]: E1009 01:05:47.327310 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-164?timeout=10s\": dial tcp 172.31.16.164:6443: connect: connection refused" interval="800ms" Oct 9 01:05:47.424599 kubelet[2908]: I1009 01:05:47.424569 2908 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-164" Oct 9 01:05:47.425165 kubelet[2908]: E1009 01:05:47.425018 2908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.164:6443/api/v1/nodes\": dial tcp 172.31.16.164:6443: connect: connection refused" node="ip-172-31-16-164" Oct 9 01:05:47.618457 kubelet[2908]: W1009 01:05:47.618392 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:47.618457 kubelet[2908]: E1009 01:05:47.618458 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:47.654478 kubelet[2908]: W1009 01:05:47.654411 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:47.654478 kubelet[2908]: E1009 01:05:47.654481 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:47.767281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634732594.mount: Deactivated successfully. 
Oct 9 01:05:47.786126 containerd[1978]: time="2024-10-09T01:05:47.786074374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:05:47.788219 containerd[1978]: time="2024-10-09T01:05:47.788181545Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:05:47.789667 containerd[1978]: time="2024-10-09T01:05:47.789614280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:05:47.791138 containerd[1978]: time="2024-10-09T01:05:47.791084256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 01:05:47.798307 containerd[1978]: time="2024-10-09T01:05:47.793614671Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:05:47.798990 containerd[1978]: time="2024-10-09T01:05:47.798613322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:05:47.818902 containerd[1978]: time="2024-10-09T01:05:47.818845566Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:05:47.824067 kubelet[2908]: W1009 01:05:47.823979 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-164&limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:47.824353 kubelet[2908]: E1009 01:05:47.824309 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-164&limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:47.825421 containerd[1978]: time="2024-10-09T01:05:47.825379404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.06396ms" Oct 9 01:05:47.826928 containerd[1978]: time="2024-10-09T01:05:47.826812410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:05:47.830533 containerd[1978]: time="2024-10-09T01:05:47.829064622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 577.925701ms" Oct 9 01:05:47.831958 containerd[1978]: time="2024-10-09T01:05:47.831326523Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.102251ms" Oct 9 01:05:48.019811 kubelet[2908]: W1009 01:05:48.019394 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:48.019811 kubelet[2908]: E1009 01:05:48.019452 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:48.129204 kubelet[2908]: E1009 01:05:48.129061 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-164?timeout=10s\": dial tcp 172.31.16.164:6443: connect: connection refused" interval="1.6s" Oct 9 01:05:48.139629 containerd[1978]: time="2024-10-09T01:05:48.139465703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:05:48.139629 containerd[1978]: time="2024-10-09T01:05:48.139537202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:05:48.139629 containerd[1978]: time="2024-10-09T01:05:48.139562023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:48.140150 containerd[1978]: time="2024-10-09T01:05:48.139676551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:48.140478 containerd[1978]: time="2024-10-09T01:05:48.135086319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:05:48.140478 containerd[1978]: time="2024-10-09T01:05:48.140164379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:05:48.140478 containerd[1978]: time="2024-10-09T01:05:48.140197807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:48.143089 containerd[1978]: time="2024-10-09T01:05:48.142948553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:05:48.143089 containerd[1978]: time="2024-10-09T01:05:48.143013196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:05:48.143089 containerd[1978]: time="2024-10-09T01:05:48.143038811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:48.143418 containerd[1978]: time="2024-10-09T01:05:48.143201395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:48.145584 containerd[1978]: time="2024-10-09T01:05:48.145068147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:48.194205 systemd[1]: Started cri-containerd-b91c42d700fcc41b2466ac71355f63fb747d43358f83632740e2ef1b6cbe2a2e.scope - libcontainer container b91c42d700fcc41b2466ac71355f63fb747d43358f83632740e2ef1b6cbe2a2e. Oct 9 01:05:48.208767 systemd[1]: Started cri-containerd-a7fe7a47dea88e8cecad4a90fe8ab07c9fd66793d6ef61c619cd1a5244812f1d.scope - libcontainer container a7fe7a47dea88e8cecad4a90fe8ab07c9fd66793d6ef61c619cd1a5244812f1d. Oct 9 01:05:48.213364 systemd[1]: Started cri-containerd-e60a6459513affe540bec62cafddf4c99acf8d830759311eb24eac0354d5da27.scope - libcontainer container e60a6459513affe540bec62cafddf4c99acf8d830759311eb24eac0354d5da27. Oct 9 01:05:48.227120 kubelet[2908]: I1009 01:05:48.227084 2908 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-164" Oct 9 01:05:48.227885 kubelet[2908]: E1009 01:05:48.227847 2908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.164:6443/api/v1/nodes\": dial tcp 172.31.16.164:6443: connect: connection refused" node="ip-172-31-16-164" Oct 9 01:05:48.352842 containerd[1978]: time="2024-10-09T01:05:48.352770279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-164,Uid:4f5208d137b104fa8e907099268a66c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e60a6459513affe540bec62cafddf4c99acf8d830759311eb24eac0354d5da27\"" Oct 9 01:05:48.375646 containerd[1978]: time="2024-10-09T01:05:48.375488593Z" level=info msg="CreateContainer within sandbox \"e60a6459513affe540bec62cafddf4c99acf8d830759311eb24eac0354d5da27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:05:48.397708 containerd[1978]: time="2024-10-09T01:05:48.397541644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-164,Uid:b20af47f273636c384e3110e68c18550,Namespace:kube-system,Attempt:0,} returns sandbox id \"b91c42d700fcc41b2466ac71355f63fb747d43358f83632740e2ef1b6cbe2a2e\"" Oct 9 01:05:48.398801 containerd[1978]: time="2024-10-09T01:05:48.398738946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-164,Uid:ad81eabbd0f6a8ca780b0aeada2907ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7fe7a47dea88e8cecad4a90fe8ab07c9fd66793d6ef61c619cd1a5244812f1d\"" Oct 9 01:05:48.402080 containerd[1978]: time="2024-10-09T01:05:48.401882232Z" level=info msg="CreateContainer within sandbox \"e60a6459513affe540bec62cafddf4c99acf8d830759311eb24eac0354d5da27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b2747a33cdac3e12340209e52f9e02d2c1a098cd4e27ac187149a05da3948d37\"" Oct 9 01:05:48.402601 containerd[1978]: time="2024-10-09T01:05:48.402570687Z" level=info msg="StartContainer for \"b2747a33cdac3e12340209e52f9e02d2c1a098cd4e27ac187149a05da3948d37\"" Oct 9 01:05:48.405180 containerd[1978]: time="2024-10-09T01:05:48.405145628Z" level=info msg="CreateContainer within sandbox \"b91c42d700fcc41b2466ac71355f63fb747d43358f83632740e2ef1b6cbe2a2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:05:48.408909 containerd[1978]: time="2024-10-09T01:05:48.408876418Z" level=info msg="CreateContainer within sandbox 
\"a7fe7a47dea88e8cecad4a90fe8ab07c9fd66793d6ef61c619cd1a5244812f1d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:05:48.449186 containerd[1978]: time="2024-10-09T01:05:48.449144593Z" level=info msg="CreateContainer within sandbox \"b91c42d700fcc41b2466ac71355f63fb747d43358f83632740e2ef1b6cbe2a2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164\"" Oct 9 01:05:48.450541 containerd[1978]: time="2024-10-09T01:05:48.450506935Z" level=info msg="StartContainer for \"41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164\"" Oct 9 01:05:48.451373 systemd[1]: Started cri-containerd-b2747a33cdac3e12340209e52f9e02d2c1a098cd4e27ac187149a05da3948d37.scope - libcontainer container b2747a33cdac3e12340209e52f9e02d2c1a098cd4e27ac187149a05da3948d37. Oct 9 01:05:48.453440 containerd[1978]: time="2024-10-09T01:05:48.453314262Z" level=info msg="CreateContainer within sandbox \"a7fe7a47dea88e8cecad4a90fe8ab07c9fd66793d6ef61c619cd1a5244812f1d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57\"" Oct 9 01:05:48.453902 containerd[1978]: time="2024-10-09T01:05:48.453873731Z" level=info msg="StartContainer for \"8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57\"" Oct 9 01:05:48.527182 systemd[1]: Started cri-containerd-8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57.scope - libcontainer container 8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57. Oct 9 01:05:48.544193 systemd[1]: Started cri-containerd-41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164.scope - libcontainer container 41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164. 
Oct 9 01:05:48.567254 containerd[1978]: time="2024-10-09T01:05:48.567194281Z" level=info msg="StartContainer for \"b2747a33cdac3e12340209e52f9e02d2c1a098cd4e27ac187149a05da3948d37\" returns successfully" Oct 9 01:05:48.650877 containerd[1978]: time="2024-10-09T01:05:48.650731297Z" level=info msg="StartContainer for \"41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164\" returns successfully" Oct 9 01:05:48.652249 kubelet[2908]: E1009 01:05:48.651787 2908 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.164:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:48.674653 containerd[1978]: time="2024-10-09T01:05:48.674605845Z" level=info msg="StartContainer for \"8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57\" returns successfully" Oct 9 01:05:49.298522 kubelet[2908]: W1009 01:05:49.298374 2908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:49.298522 kubelet[2908]: E1009 01:05:49.298465 2908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.164:6443: connect: connection refused Oct 9 01:05:49.833063 kubelet[2908]: I1009 01:05:49.832875 2908 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-164" Oct 9 01:05:51.982664 kubelet[2908]: E1009 01:05:51.982624 2908 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-164\" not found" node="ip-172-31-16-164" Oct 9 01:05:52.013011 kubelet[2908]: I1009 01:05:52.012640 2908 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-164" Oct 9 01:05:52.692458 kubelet[2908]: I1009 01:05:52.692419 2908 apiserver.go:52] "Watching apiserver" Oct 9 01:05:52.720196 kubelet[2908]: I1009 01:05:52.720151 2908 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:05:54.199605 systemd[1]: Reloading requested from client PID 3189 ('systemctl') (unit session-7.scope)... Oct 9 01:05:54.199626 systemd[1]: Reloading... Oct 9 01:05:54.401900 zram_generator::config[3229]: No configuration found. Oct 9 01:05:54.655872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:05:54.817696 systemd[1]: Reloading finished in 617 ms. Oct 9 01:05:54.876042 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:05:54.890813 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:05:54.891811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:05:54.914234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:05:55.381358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 01:05:55.404390 (kubelet)[3286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:05:55.444853 update_engine[1958]: I20241009 01:05:55.442030 1958 update_attempter.cc:509] Updating boot flags... Oct 9 01:05:55.550849 kubelet[3286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:05:55.550849 kubelet[3286]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:05:55.550849 kubelet[3286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:05:55.551286 kubelet[3286]: I1009 01:05:55.550968 3286 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:05:55.588854 kubelet[3286]: I1009 01:05:55.587814 3286 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:05:55.588854 kubelet[3286]: I1009 01:05:55.588047 3286 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:05:55.589788 kubelet[3286]: I1009 01:05:55.589750 3286 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:05:55.598892 kubelet[3286]: I1009 01:05:55.598858 3286 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:05:55.605055 kubelet[3286]: I1009 01:05:55.605002 3286 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:05:55.630849 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3309) Oct 9 01:05:55.632179 kubelet[3286]: I1009 01:05:55.632083 3286 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:05:55.633695 kubelet[3286]: I1009 01:05:55.633014 3286 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:05:55.633695 kubelet[3286]: I1009 01:05:55.633056 3286 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-164","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:05:55.633695 kubelet[3286]: I1009 01:05:55.633464 3286 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:05:55.633695 kubelet[3286]: I1009 01:05:55.633482 3286 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:05:55.633695 kubelet[3286]: I1009 01:05:55.633536 3286 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:05:55.638311 kubelet[3286]: I1009 01:05:55.636886 3286 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:05:55.638311 kubelet[3286]: I1009 01:05:55.636925 3286 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:05:55.638311 kubelet[3286]: I1009 01:05:55.636958 3286 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:05:55.638311 kubelet[3286]: I1009 01:05:55.636976 3286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:05:55.664325 kubelet[3286]: I1009 01:05:55.664291 3286 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:05:55.673926 kubelet[3286]: I1009 01:05:55.673592 3286 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:05:55.681287 kubelet[3286]: I1009 01:05:55.681225 3286 server.go:1264] "Started kubelet" Oct 9 01:05:55.693036 kubelet[3286]: I1009 01:05:55.692866 3286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:05:55.706861 kubelet[3286]: I1009 01:05:55.704917 3286 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:05:55.710498 kubelet[3286]: I1009 01:05:55.710418 3286 server.go:455] "Adding debug handlers to kubelet 
server" Oct 9 01:05:55.723399 kubelet[3286]: I1009 01:05:55.723170 3286 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:05:55.727169 kubelet[3286]: I1009 01:05:55.726605 3286 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:05:55.740510 kubelet[3286]: I1009 01:05:55.739713 3286 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:05:55.757169 kubelet[3286]: I1009 01:05:55.757134 3286 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:05:55.762278 kubelet[3286]: I1009 01:05:55.762217 3286 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:05:55.766556 kubelet[3286]: I1009 01:05:55.766111 3286 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:05:55.773737 kubelet[3286]: I1009 01:05:55.773535 3286 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:05:55.788707 kubelet[3286]: I1009 01:05:55.788597 3286 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:05:55.860549 kubelet[3286]: I1009 01:05:55.860347 3286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:05:55.914602 kubelet[3286]: I1009 01:05:55.911353 3286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:05:55.914602 kubelet[3286]: I1009 01:05:55.913033 3286 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:05:55.937708 kubelet[3286]: I1009 01:05:55.937649 3286 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:05:55.938028 kubelet[3286]: E1009 01:05:55.937944 3286 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:05:55.964060 kubelet[3286]: I1009 01:05:55.963340 3286 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-164" Oct 9 01:05:56.056286 kubelet[3286]: E1009 01:05:56.055943 3286 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:05:56.114647 kubelet[3286]: I1009 01:05:56.114516 3286 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-164" Oct 9 01:05:56.114647 kubelet[3286]: I1009 01:05:56.114626 3286 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-164" Oct 9 01:05:56.248515 kubelet[3286]: I1009 01:05:56.247713 3286 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:05:56.248515 kubelet[3286]: I1009 01:05:56.247733 3286 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:05:56.248515 kubelet[3286]: I1009 01:05:56.247755 3286 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:05:56.248515 kubelet[3286]: I1009 01:05:56.247983 3286 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:05:56.248515 kubelet[3286]: I1009 01:05:56.247996 3286 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:05:56.248515 kubelet[3286]: I1009 01:05:56.248025 3286 policy_none.go:49] "None policy: Start" Oct 9 01:05:56.255609 kubelet[3286]: I1009 01:05:56.251994 3286 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:05:56.255609 kubelet[3286]: I1009 01:05:56.252027 3286 state_mem.go:35] 
"Initializing new in-memory state store" Oct 9 01:05:56.255609 kubelet[3286]: I1009 01:05:56.252283 3286 state_mem.go:75] "Updated machine memory state" Oct 9 01:05:56.256154 kubelet[3286]: E1009 01:05:56.256043 3286 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:05:56.264991 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3298) Oct 9 01:05:56.274305 kubelet[3286]: I1009 01:05:56.274272 3286 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:05:56.277708 kubelet[3286]: I1009 01:05:56.277222 3286 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:05:56.277708 kubelet[3286]: I1009 01:05:56.277584 3286 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:05:56.659863 kubelet[3286]: I1009 01:05:56.659794 3286 apiserver.go:52] "Watching apiserver" Oct 9 01:05:56.663290 kubelet[3286]: I1009 01:05:56.659867 3286 topology_manager.go:215] "Topology Admit Handler" podUID="4f5208d137b104fa8e907099268a66c1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-164" Oct 9 01:05:56.663424 kubelet[3286]: I1009 01:05:56.663386 3286 topology_manager.go:215] "Topology Admit Handler" podUID="b20af47f273636c384e3110e68c18550" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:56.663499 kubelet[3286]: I1009 01:05:56.663476 3286 topology_manager.go:215] "Topology Admit Handler" podUID="ad81eabbd0f6a8ca780b0aeada2907ea" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-164" Oct 9 01:05:56.734852 kubelet[3286]: I1009 01:05:56.731374 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f5208d137b104fa8e907099268a66c1-ca-certs\") pod \"kube-apiserver-ip-172-31-16-164\" (UID: \"4f5208d137b104fa8e907099268a66c1\") " pod="kube-system/kube-apiserver-ip-172-31-16-164" Oct 9 01:05:56.734852 kubelet[3286]: I1009 01:05:56.731440 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f5208d137b104fa8e907099268a66c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-164\" (UID: \"4f5208d137b104fa8e907099268a66c1\") " pod="kube-system/kube-apiserver-ip-172-31-16-164" Oct 9 01:05:56.734852 kubelet[3286]: I1009 01:05:56.731512 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f5208d137b104fa8e907099268a66c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-164\" (UID: \"4f5208d137b104fa8e907099268a66c1\") " pod="kube-system/kube-apiserver-ip-172-31-16-164" Oct 9 01:05:56.734852 kubelet[3286]: I1009 01:05:56.731553 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:56.734852 kubelet[3286]: I1009 01:05:56.731587 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:56.735423 kubelet[3286]: I1009 01:05:56.731620 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad81eabbd0f6a8ca780b0aeada2907ea-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-164\" (UID: \"ad81eabbd0f6a8ca780b0aeada2907ea\") " pod="kube-system/kube-scheduler-ip-172-31-16-164" Oct 9 01:05:56.735423 kubelet[3286]: I1009 01:05:56.732064 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:56.741844 kubelet[3286]: I1009 01:05:56.739233 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:56.741844 kubelet[3286]: I1009 01:05:56.740452 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20af47f273636c384e3110e68c18550-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-164\" (UID: \"b20af47f273636c384e3110e68c18550\") " pod="kube-system/kube-controller-manager-ip-172-31-16-164" Oct 9 01:05:56.813409 kubelet[3286]: I1009 01:05:56.792786 3286 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:05:56.850664 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3298) Oct 9 01:05:57.167017 kubelet[3286]: I1009 01:05:57.166916 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-164" podStartSLOduration=1.166893219 podStartE2EDuration="1.166893219s" podCreationTimestamp="2024-10-09 01:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:05:57.123803114 +0000 UTC m=+1.706930077" watchObservedRunningTime="2024-10-09 01:05:57.166893219 +0000 UTC m=+1.750020182" Oct 9 01:05:57.207850 kubelet[3286]: I1009 01:05:57.207756 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-164" podStartSLOduration=1.207728332 podStartE2EDuration="1.207728332s" podCreationTimestamp="2024-10-09 01:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:05:57.166546621 +0000 UTC m=+1.749673584" watchObservedRunningTime="2024-10-09 01:05:57.207728332 +0000 UTC m=+1.790855289" Oct 9 01:05:57.240054 kubelet[3286]: I1009 01:05:57.239892 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-164" podStartSLOduration=1.239870194 
podStartE2EDuration="1.239870194s" podCreationTimestamp="2024-10-09 01:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:05:57.208757434 +0000 UTC m=+1.791884419" watchObservedRunningTime="2024-10-09 01:05:57.239870194 +0000 UTC m=+1.822997166" Oct 9 01:06:02.997900 sudo[2300]: pam_unix(sudo:session): session closed for user root Oct 9 01:06:03.022880 sshd[2297]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:03.028787 systemd[1]: sshd@6-172.31.16.164:22-147.75.109.163:47888.service: Deactivated successfully. Oct 9 01:06:03.032297 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:06:03.032627 systemd[1]: session-7.scope: Consumed 5.249s CPU time, 184.2M memory peak, 0B memory swap peak. Oct 9 01:06:03.034764 systemd-logind[1957]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:06:03.036480 systemd-logind[1957]: Removed session 7. Oct 9 01:06:09.671356 kubelet[3286]: I1009 01:06:09.671147 3286 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:06:09.673308 containerd[1978]: time="2024-10-09T01:06:09.672391852Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 01:06:09.673814 kubelet[3286]: I1009 01:06:09.672853 3286 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:06:09.893072 kubelet[3286]: I1009 01:06:09.892839 3286 topology_manager.go:215] "Topology Admit Handler" podUID="2d7d1d90-4a84-4ffb-a57a-845eef739e52" podNamespace="kube-system" podName="kube-proxy-zgtkx" Oct 9 01:06:09.902845 kubelet[3286]: I1009 01:06:09.902054 3286 topology_manager.go:215] "Topology Admit Handler" podUID="2a30e8ae-9a91-4da7-9bae-5cdc7b81f22f" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-b5t7s" Oct 9 01:06:09.907544 kubelet[3286]: W1009 01:06:09.907354 3286 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-164" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-164' and this object Oct 9 01:06:09.907698 kubelet[3286]: E1009 01:06:09.907555 3286 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-164" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-164' and this object Oct 9 01:06:09.907698 kubelet[3286]: W1009 01:06:09.907408 3286 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-16-164" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-164' and this object Oct 9 01:06:09.907698 kubelet[3286]: E1009 01:06:09.907580 3286 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-16-164" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-164' and this object Oct 9 01:06:09.924310 
systemd[1]: Created slice kubepods-besteffort-pod2d7d1d90_4a84_4ffb_a57a_845eef739e52.slice - libcontainer container kubepods-besteffort-pod2d7d1d90_4a84_4ffb_a57a_845eef739e52.slice. Oct 9 01:06:09.949932 systemd[1]: Created slice kubepods-besteffort-pod2a30e8ae_9a91_4da7_9bae_5cdc7b81f22f.slice - libcontainer container kubepods-besteffort-pod2a30e8ae_9a91_4da7_9bae_5cdc7b81f22f.slice. Oct 9 01:06:09.958601 kubelet[3286]: I1009 01:06:09.958213 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d7d1d90-4a84-4ffb-a57a-845eef739e52-kube-proxy\") pod \"kube-proxy-zgtkx\" (UID: \"2d7d1d90-4a84-4ffb-a57a-845eef739e52\") " pod="kube-system/kube-proxy-zgtkx" Oct 9 01:06:09.958601 kubelet[3286]: I1009 01:06:09.958262 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d7d1d90-4a84-4ffb-a57a-845eef739e52-xtables-lock\") pod \"kube-proxy-zgtkx\" (UID: \"2d7d1d90-4a84-4ffb-a57a-845eef739e52\") " pod="kube-system/kube-proxy-zgtkx" Oct 9 01:06:09.958601 kubelet[3286]: I1009 01:06:09.958317 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d7d1d90-4a84-4ffb-a57a-845eef739e52-lib-modules\") pod \"kube-proxy-zgtkx\" (UID: \"2d7d1d90-4a84-4ffb-a57a-845eef739e52\") " pod="kube-system/kube-proxy-zgtkx" Oct 9 01:06:09.958601 kubelet[3286]: I1009 01:06:09.958343 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2n4\" (UniqueName: \"kubernetes.io/projected/2a30e8ae-9a91-4da7-9bae-5cdc7b81f22f-kube-api-access-bv2n4\") pod \"tigera-operator-77f994b5bb-b5t7s\" (UID: \"2a30e8ae-9a91-4da7-9bae-5cdc7b81f22f\") " pod="tigera-operator/tigera-operator-77f994b5bb-b5t7s" Oct 9 01:06:09.958601 kubelet[3286]: I1009 01:06:09.958411 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrrg\" (UniqueName: \"kubernetes.io/projected/2d7d1d90-4a84-4ffb-a57a-845eef739e52-kube-api-access-ghrrg\") pod \"kube-proxy-zgtkx\" (UID: \"2d7d1d90-4a84-4ffb-a57a-845eef739e52\") " pod="kube-system/kube-proxy-zgtkx" Oct 9 01:06:09.959192 kubelet[3286]: I1009 01:06:09.958436 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2a30e8ae-9a91-4da7-9bae-5cdc7b81f22f-var-lib-calico\") pod \"tigera-operator-77f994b5bb-b5t7s\" (UID: \"2a30e8ae-9a91-4da7-9bae-5cdc7b81f22f\") " pod="tigera-operator/tigera-operator-77f994b5bb-b5t7s" Oct 9 01:06:10.259126 containerd[1978]: time="2024-10-09T01:06:10.258909673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-b5t7s,Uid:2a30e8ae-9a91-4da7-9bae-5cdc7b81f22f,Namespace:tigera-operator,Attempt:0,}" Oct 9 01:06:10.344881 containerd[1978]: time="2024-10-09T01:06:10.342412685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:10.344881 containerd[1978]: time="2024-10-09T01:06:10.342550803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:10.344881 containerd[1978]: time="2024-10-09T01:06:10.342955565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:10.344881 containerd[1978]: time="2024-10-09T01:06:10.343155023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:10.378144 systemd[1]: Started cri-containerd-486ad755035f97958e6dcdf7af700f6f99523e14c73d130b7913275ea78c40e6.scope - libcontainer container 486ad755035f97958e6dcdf7af700f6f99523e14c73d130b7913275ea78c40e6. Oct 9 01:06:10.430200 containerd[1978]: time="2024-10-09T01:06:10.430065624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-b5t7s,Uid:2a30e8ae-9a91-4da7-9bae-5cdc7b81f22f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"486ad755035f97958e6dcdf7af700f6f99523e14c73d130b7913275ea78c40e6\"" Oct 9 01:06:10.467620 containerd[1978]: time="2024-10-09T01:06:10.467572145Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 01:06:11.068593 kubelet[3286]: E1009 01:06:11.068392 3286 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 9 01:06:11.069414 kubelet[3286]: E1009 01:06:11.068597 3286 projected.go:200] Error preparing data for projected volume kube-api-access-ghrrg for pod kube-system/kube-proxy-zgtkx: failed to sync configmap cache: timed out waiting for the condition Oct 9 01:06:11.069414 kubelet[3286]: E1009 01:06:11.068697 3286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d7d1d90-4a84-4ffb-a57a-845eef739e52-kube-api-access-ghrrg podName:2d7d1d90-4a84-4ffb-a57a-845eef739e52 nodeName:}" failed. No retries permitted until 2024-10-09 01:06:11.56867076 +0000 UTC m=+16.151797718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ghrrg" (UniqueName: "kubernetes.io/projected/2d7d1d90-4a84-4ffb-a57a-845eef739e52-kube-api-access-ghrrg") pod "kube-proxy-zgtkx" (UID: "2d7d1d90-4a84-4ffb-a57a-845eef739e52") : failed to sync configmap cache: timed out waiting for the condition Oct 9 01:06:11.745856 containerd[1978]: time="2024-10-09T01:06:11.745730973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zgtkx,Uid:2d7d1d90-4a84-4ffb-a57a-845eef739e52,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:11.883167 containerd[1978]: time="2024-10-09T01:06:11.882909793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:11.883167 containerd[1978]: time="2024-10-09T01:06:11.882998499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:11.883167 containerd[1978]: time="2024-10-09T01:06:11.883015159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:11.883167 containerd[1978]: time="2024-10-09T01:06:11.883111545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:11.904128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964794842.mount: Deactivated successfully. 
Oct 9 01:06:11.939097 systemd[1]: Started cri-containerd-4fbf6bf30c38b3b4a54643458e94e66abf7f116c7c0caa22e297e81553ba5358.scope - libcontainer container 4fbf6bf30c38b3b4a54643458e94e66abf7f116c7c0caa22e297e81553ba5358. Oct 9 01:06:11.988905 containerd[1978]: time="2024-10-09T01:06:11.988860144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zgtkx,Uid:2d7d1d90-4a84-4ffb-a57a-845eef739e52,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fbf6bf30c38b3b4a54643458e94e66abf7f116c7c0caa22e297e81553ba5358\"" Oct 9 01:06:12.057975 containerd[1978]: time="2024-10-09T01:06:12.056941166Z" level=info msg="CreateContainer within sandbox \"4fbf6bf30c38b3b4a54643458e94e66abf7f116c7c0caa22e297e81553ba5358\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:06:12.093337 containerd[1978]: time="2024-10-09T01:06:12.093288905Z" level=info msg="CreateContainer within sandbox \"4fbf6bf30c38b3b4a54643458e94e66abf7f116c7c0caa22e297e81553ba5358\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"670fbe43dcea1f84216d00c667b13df0fbcde954025583011625acc64fa5bd44\"" Oct 9 01:06:12.096937 containerd[1978]: time="2024-10-09T01:06:12.095304891Z" level=info msg="StartContainer for \"670fbe43dcea1f84216d00c667b13df0fbcde954025583011625acc64fa5bd44\"" Oct 9 01:06:12.165590 systemd[1]: Started cri-containerd-670fbe43dcea1f84216d00c667b13df0fbcde954025583011625acc64fa5bd44.scope - libcontainer container 670fbe43dcea1f84216d00c667b13df0fbcde954025583011625acc64fa5bd44. Oct 9 01:06:12.251141 containerd[1978]: time="2024-10-09T01:06:12.251100951Z" level=info msg="StartContainer for \"670fbe43dcea1f84216d00c667b13df0fbcde954025583011625acc64fa5bd44\" returns successfully" Oct 9 01:06:13.375088 containerd[1978]: time="2024-10-09T01:06:13.375042414Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:13.377440 containerd[1978]: time="2024-10-09T01:06:13.377109094Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136521" Oct 9 01:06:13.379489 containerd[1978]: time="2024-10-09T01:06:13.379444537Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:13.385992 containerd[1978]: time="2024-10-09T01:06:13.385932173Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:13.395406 containerd[1978]: time="2024-10-09T01:06:13.395314171Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.927678642s" Oct 9 01:06:13.395406 containerd[1978]: time="2024-10-09T01:06:13.395410063Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 01:06:13.421758 containerd[1978]: time="2024-10-09T01:06:13.421708890Z" level=info msg="CreateContainer within sandbox \"486ad755035f97958e6dcdf7af700f6f99523e14c73d130b7913275ea78c40e6\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 01:06:13.458440 containerd[1978]: time="2024-10-09T01:06:13.458393580Z" level=info msg="CreateContainer within sandbox \"486ad755035f97958e6dcdf7af700f6f99523e14c73d130b7913275ea78c40e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790\"" Oct 9 01:06:13.459305 containerd[1978]: time="2024-10-09T01:06:13.459271271Z" level=info msg="StartContainer for \"83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790\"" Oct 9 01:06:13.518387 systemd[1]: Started cri-containerd-83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790.scope - libcontainer container 83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790. Oct 9 01:06:13.589099 containerd[1978]: time="2024-10-09T01:06:13.588429881Z" level=info msg="StartContainer for \"83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790\" returns successfully" Oct 9 01:06:14.241809 kubelet[3286]: I1009 01:06:14.237517 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zgtkx" podStartSLOduration=5.237491902 podStartE2EDuration="5.237491902s" podCreationTimestamp="2024-10-09 01:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:13.304229592 +0000 UTC m=+17.887356555" watchObservedRunningTime="2024-10-09 01:06:14.237491902 +0000 UTC m=+18.820618867" Oct 9 01:06:14.243041 kubelet[3286]: I1009 01:06:14.242643 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-b5t7s" podStartSLOduration=2.268309219 podStartE2EDuration="5.242595163s" podCreationTimestamp="2024-10-09 01:06:09 +0000 UTC" firstStartedPulling="2024-10-09 01:06:10.43196663 +0000 UTC m=+15.015093572" lastFinishedPulling="2024-10-09 01:06:13.40625257 +0000 UTC m=+17.989379516" observedRunningTime="2024-10-09 01:06:14.241842093 +0000 UTC m=+18.824969036" watchObservedRunningTime="2024-10-09 01:06:14.242595163 +0000 UTC m=+18.825722131" Oct 9 01:06:17.025890 kubelet[3286]: I1009 01:06:17.024227 3286 topology_manager.go:215] "Topology Admit Handler" podUID="c12997a7-0112-4663-8c25-f94eb828a2f2" podNamespace="calico-system" podName="calico-typha-5fb687b967-9php4" Oct 9 01:06:17.041677 systemd[1]: Created slice kubepods-besteffort-podc12997a7_0112_4663_8c25_f94eb828a2f2.slice - libcontainer container kubepods-besteffort-podc12997a7_0112_4663_8c25_f94eb828a2f2.slice. 
Oct 9 01:06:17.116963 kubelet[3286]: I1009 01:06:17.116918 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8sc2\" (UniqueName: \"kubernetes.io/projected/c12997a7-0112-4663-8c25-f94eb828a2f2-kube-api-access-c8sc2\") pod \"calico-typha-5fb687b967-9php4\" (UID: \"c12997a7-0112-4663-8c25-f94eb828a2f2\") " pod="calico-system/calico-typha-5fb687b967-9php4" Oct 9 01:06:17.116963 kubelet[3286]: I1009 01:06:17.116969 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c12997a7-0112-4663-8c25-f94eb828a2f2-typha-certs\") pod \"calico-typha-5fb687b967-9php4\" (UID: \"c12997a7-0112-4663-8c25-f94eb828a2f2\") " pod="calico-system/calico-typha-5fb687b967-9php4" Oct 9 01:06:17.117185 kubelet[3286]: I1009 01:06:17.116996 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c12997a7-0112-4663-8c25-f94eb828a2f2-tigera-ca-bundle\") pod \"calico-typha-5fb687b967-9php4\" (UID: \"c12997a7-0112-4663-8c25-f94eb828a2f2\") " pod="calico-system/calico-typha-5fb687b967-9php4" Oct 9 01:06:17.144184 kubelet[3286]: I1009 01:06:17.142607 3286 topology_manager.go:215] "Topology Admit Handler" podUID="98d83c46-8090-457a-abdd-0165130c522f" podNamespace="calico-system" podName="calico-node-rz8db" Oct 9 01:06:17.155320 systemd[1]: Created slice kubepods-besteffort-pod98d83c46_8090_457a_abdd_0165130c522f.slice - libcontainer container kubepods-besteffort-pod98d83c46_8090_457a_abdd_0165130c522f.slice. Oct 9 01:06:17.217756 kubelet[3286]: I1009 01:06:17.217703 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-xtables-lock\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.217756 kubelet[3286]: I1009 01:06:17.217749 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-var-run-calico\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.218780 kubelet[3286]: I1009 01:06:17.217773 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-cni-log-dir\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.218780 kubelet[3286]: I1009 01:06:17.217815 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-lib-modules\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.218780 kubelet[3286]: I1009 01:06:17.217862 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-var-lib-calico\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " 
pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.218780 kubelet[3286]: I1009 01:06:17.217914 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-cni-bin-dir\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.218780 kubelet[3286]: I1009 01:06:17.217988 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98d83c46-8090-457a-abdd-0165130c522f-node-certs\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.219045 kubelet[3286]: I1009 01:06:17.218015 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-cni-net-dir\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.219045 kubelet[3286]: I1009 01:06:17.218040 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-flexvol-driver-host\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.219045 kubelet[3286]: I1009 01:06:17.218064 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9bj\" (UniqueName: \"kubernetes.io/projected/98d83c46-8090-457a-abdd-0165130c522f-kube-api-access-7v9bj\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.219045 kubelet[3286]: I1009 01:06:17.218115 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d83c46-8090-457a-abdd-0165130c522f-tigera-ca-bundle\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.219045 kubelet[3286]: I1009 01:06:17.218153 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98d83c46-8090-457a-abdd-0165130c522f-policysync\") pod \"calico-node-rz8db\" (UID: \"98d83c46-8090-457a-abdd-0165130c522f\") " pod="calico-system/calico-node-rz8db" Oct 9 01:06:17.311936 kubelet[3286]: I1009 01:06:17.310268 3286 topology_manager.go:215] "Topology Admit Handler" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" podNamespace="calico-system" podName="csi-node-driver-554cf" Oct 9 01:06:17.311936 kubelet[3286]: E1009 01:06:17.310801 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:17.328628 kubelet[3286]: E1009 01:06:17.328587 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.328809 kubelet[3286]: 
W1009 01:06:17.328789 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.328981 kubelet[3286]: E1009 01:06:17.328966 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.354074 containerd[1978]: time="2024-10-09T01:06:17.353998903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb687b967-9php4,Uid:c12997a7-0112-4663-8c25-f94eb828a2f2,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:17.357301 kubelet[3286]: E1009 01:06:17.357274 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.357482 kubelet[3286]: W1009 01:06:17.357463 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.357609 kubelet[3286]: E1009 01:06:17.357563 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.390663 kubelet[3286]: E1009 01:06:17.389289 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.390663 kubelet[3286]: W1009 01:06:17.389511 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.390663 kubelet[3286]: E1009 01:06:17.389545 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.391440 kubelet[3286]: E1009 01:06:17.391307 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.391440 kubelet[3286]: W1009 01:06:17.391333 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.391440 kubelet[3286]: E1009 01:06:17.391352 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.392251 kubelet[3286]: E1009 01:06:17.392152 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.392251 kubelet[3286]: W1009 01:06:17.392199 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.392251 kubelet[3286]: E1009 01:06:17.392218 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.394237 kubelet[3286]: E1009 01:06:17.393839 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.394237 kubelet[3286]: W1009 01:06:17.393872 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.394237 kubelet[3286]: E1009 01:06:17.393908 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.395305 kubelet[3286]: E1009 01:06:17.394541 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.395305 kubelet[3286]: W1009 01:06:17.394666 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.395305 kubelet[3286]: E1009 01:06:17.394685 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.395492 kubelet[3286]: E1009 01:06:17.395324 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.395492 kubelet[3286]: W1009 01:06:17.395446 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.395492 kubelet[3286]: E1009 01:06:17.395464 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.397682 kubelet[3286]: E1009 01:06:17.395949 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.397682 kubelet[3286]: W1009 01:06:17.395963 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.397682 kubelet[3286]: E1009 01:06:17.395978 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.397682 kubelet[3286]: E1009 01:06:17.396517 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.397682 kubelet[3286]: W1009 01:06:17.396528 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.397682 kubelet[3286]: E1009 01:06:17.396651 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.397682 kubelet[3286]: E1009 01:06:17.397266 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.397682 kubelet[3286]: W1009 01:06:17.397277 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.400239 kubelet[3286]: E1009 01:06:17.397291 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.400612 kubelet[3286]: E1009 01:06:17.400590 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.400688 kubelet[3286]: W1009 01:06:17.400612 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.400688 kubelet[3286]: E1009 01:06:17.400631 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.401677 kubelet[3286]: E1009 01:06:17.401635 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.401677 kubelet[3286]: W1009 01:06:17.401656 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.401677 kubelet[3286]: E1009 01:06:17.401675 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.402279 kubelet[3286]: E1009 01:06:17.402250 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.404305 kubelet[3286]: W1009 01:06:17.402461 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.404305 kubelet[3286]: E1009 01:06:17.402483 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.404588 kubelet[3286]: E1009 01:06:17.404455 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.404588 kubelet[3286]: W1009 01:06:17.404470 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.404588 kubelet[3286]: E1009 01:06:17.404486 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.405229 kubelet[3286]: E1009 01:06:17.405202 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.405356 kubelet[3286]: W1009 01:06:17.405319 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.405356 kubelet[3286]: E1009 01:06:17.405344 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.407978 kubelet[3286]: E1009 01:06:17.407955 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.407978 kubelet[3286]: W1009 01:06:17.407972 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.408254 kubelet[3286]: E1009 01:06:17.407989 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.408389 kubelet[3286]: E1009 01:06:17.408369 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.408389 kubelet[3286]: W1009 01:06:17.408386 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.408661 kubelet[3286]: E1009 01:06:17.408402 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.408910 kubelet[3286]: E1009 01:06:17.408891 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.408910 kubelet[3286]: W1009 01:06:17.408907 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.409137 kubelet[3286]: E1009 01:06:17.408921 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.410359 kubelet[3286]: E1009 01:06:17.410210 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.410359 kubelet[3286]: W1009 01:06:17.410225 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.410359 kubelet[3286]: E1009 01:06:17.410239 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.411467 kubelet[3286]: E1009 01:06:17.411450 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.411467 kubelet[3286]: W1009 01:06:17.411466 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.412733 kubelet[3286]: E1009 01:06:17.411481 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.413246 kubelet[3286]: E1009 01:06:17.413214 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.413246 kubelet[3286]: W1009 01:06:17.413234 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.413368 kubelet[3286]: E1009 01:06:17.413250 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.429558 kubelet[3286]: E1009 01:06:17.429063 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.429558 kubelet[3286]: W1009 01:06:17.429093 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.429558 kubelet[3286]: E1009 01:06:17.429121 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.429558 kubelet[3286]: I1009 01:06:17.429168 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3b7d4d8-eaee-47df-9d20-3c65da15fec6-kubelet-dir\") pod \"csi-node-driver-554cf\" (UID: \"f3b7d4d8-eaee-47df-9d20-3c65da15fec6\") " pod="calico-system/csi-node-driver-554cf" Oct 9 01:06:17.429934 kubelet[3286]: E1009 01:06:17.429582 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.429934 kubelet[3286]: W1009 01:06:17.429598 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.429934 kubelet[3286]: E1009 01:06:17.429632 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.429934 kubelet[3286]: I1009 01:06:17.429659 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4zw\" (UniqueName: \"kubernetes.io/projected/f3b7d4d8-eaee-47df-9d20-3c65da15fec6-kube-api-access-sz4zw\") pod \"csi-node-driver-554cf\" (UID: \"f3b7d4d8-eaee-47df-9d20-3c65da15fec6\") " pod="calico-system/csi-node-driver-554cf" Oct 9 01:06:17.430436 kubelet[3286]: E1009 01:06:17.430311 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.430436 kubelet[3286]: W1009 01:06:17.430325 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.430436 kubelet[3286]: E1009 01:06:17.430403 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.431142 kubelet[3286]: E1009 01:06:17.430947 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.431142 kubelet[3286]: W1009 01:06:17.430972 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.431142 kubelet[3286]: E1009 01:06:17.430995 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.431142 kubelet[3286]: I1009 01:06:17.431021 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f3b7d4d8-eaee-47df-9d20-3c65da15fec6-varrun\") pod \"csi-node-driver-554cf\" (UID: \"f3b7d4d8-eaee-47df-9d20-3c65da15fec6\") " pod="calico-system/csi-node-driver-554cf" Oct 9 01:06:17.431935 kubelet[3286]: E1009 01:06:17.431789 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.431935 kubelet[3286]: W1009 01:06:17.431807 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.431935 kubelet[3286]: E1009 01:06:17.431835 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.432316 kubelet[3286]: E1009 01:06:17.432302 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.432510 kubelet[3286]: W1009 01:06:17.432495 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.432708 kubelet[3286]: E1009 01:06:17.432577 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.433263 kubelet[3286]: E1009 01:06:17.433006 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.433263 kubelet[3286]: W1009 01:06:17.433018 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.433263 kubelet[3286]: E1009 01:06:17.433036 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.433263 kubelet[3286]: I1009 01:06:17.433061 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3b7d4d8-eaee-47df-9d20-3c65da15fec6-socket-dir\") pod \"csi-node-driver-554cf\" (UID: \"f3b7d4d8-eaee-47df-9d20-3c65da15fec6\") " pod="calico-system/csi-node-driver-554cf" Oct 9 01:06:17.433802 kubelet[3286]: E1009 01:06:17.433611 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.433802 kubelet[3286]: W1009 01:06:17.433628 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.433802 kubelet[3286]: E1009 01:06:17.433643 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.434281 kubelet[3286]: E1009 01:06:17.434139 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.434281 kubelet[3286]: W1009 01:06:17.434153 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.434281 kubelet[3286]: E1009 01:06:17.434166 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.434605 kubelet[3286]: E1009 01:06:17.434593 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.434698 kubelet[3286]: W1009 01:06:17.434675 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.434799 kubelet[3286]: E1009 01:06:17.434787 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.435155 kubelet[3286]: E1009 01:06:17.435142 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.435261 kubelet[3286]: W1009 01:06:17.435230 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.435261 kubelet[3286]: E1009 01:06:17.435247 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.435709 kubelet[3286]: E1009 01:06:17.435695 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.435896 kubelet[3286]: W1009 01:06:17.435867 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.436282 kubelet[3286]: E1009 01:06:17.436036 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.436282 kubelet[3286]: I1009 01:06:17.436065 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3b7d4d8-eaee-47df-9d20-3c65da15fec6-registration-dir\") pod \"csi-node-driver-554cf\" (UID: \"f3b7d4d8-eaee-47df-9d20-3c65da15fec6\") " pod="calico-system/csi-node-driver-554cf" Oct 9 01:06:17.436669 kubelet[3286]: E1009 01:06:17.436472 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.436669 kubelet[3286]: W1009 01:06:17.436485 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.436669 kubelet[3286]: E1009 01:06:17.436498 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.437534 kubelet[3286]: E1009 01:06:17.437379 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.437534 kubelet[3286]: W1009 01:06:17.437392 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.437534 kubelet[3286]: E1009 01:06:17.437423 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.438711 kubelet[3286]: E1009 01:06:17.438698 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.438868 kubelet[3286]: W1009 01:06:17.438777 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.438868 kubelet[3286]: E1009 01:06:17.438794 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.446906 containerd[1978]: time="2024-10-09T01:06:17.446111985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:17.446906 containerd[1978]: time="2024-10-09T01:06:17.446368629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:17.446906 containerd[1978]: time="2024-10-09T01:06:17.446391108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:17.446906 containerd[1978]: time="2024-10-09T01:06:17.446600043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:17.462564 containerd[1978]: time="2024-10-09T01:06:17.462510856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rz8db,Uid:98d83c46-8090-457a-abdd-0165130c522f,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:17.490951 systemd[1]: Started cri-containerd-178b525a523c7dcb45a2a898dda423c3a9e596937e148cad5cab490bd40f888b.scope - libcontainer container 178b525a523c7dcb45a2a898dda423c3a9e596937e148cad5cab490bd40f888b. Oct 9 01:06:17.538853 kubelet[3286]: E1009 01:06:17.538509 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.538853 kubelet[3286]: W1009 01:06:17.538537 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.538853 kubelet[3286]: E1009 01:06:17.538563 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.541858 kubelet[3286]: E1009 01:06:17.541307 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.541858 kubelet[3286]: W1009 01:06:17.541328 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.541858 kubelet[3286]: E1009 01:06:17.541353 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.543024 kubelet[3286]: E1009 01:06:17.542926 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.543024 kubelet[3286]: W1009 01:06:17.542988 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.543243 kubelet[3286]: E1009 01:06:17.543023 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.544330 kubelet[3286]: E1009 01:06:17.544298 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.544330 kubelet[3286]: W1009 01:06:17.544316 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.544467 kubelet[3286]: E1009 01:06:17.544336 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.545028 kubelet[3286]: E1009 01:06:17.544996 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.545028 kubelet[3286]: W1009 01:06:17.545017 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.545451 kubelet[3286]: E1009 01:06:17.545137 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.546755 kubelet[3286]: E1009 01:06:17.546735 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.546755 kubelet[3286]: W1009 01:06:17.546754 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.548874 kubelet[3286]: E1009 01:06:17.548176 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.548874 kubelet[3286]: E1009 01:06:17.548579 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.548874 kubelet[3286]: W1009 01:06:17.548590 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.548874 kubelet[3286]: E1009 01:06:17.548718 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.549715 kubelet[3286]: E1009 01:06:17.549357 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.549715 kubelet[3286]: W1009 01:06:17.549368 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.549715 kubelet[3286]: E1009 01:06:17.549388 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.550240 kubelet[3286]: E1009 01:06:17.550057 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.550240 kubelet[3286]: W1009 01:06:17.550071 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.550240 kubelet[3286]: E1009 01:06:17.550154 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.552493 kubelet[3286]: E1009 01:06:17.552470 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.552493 kubelet[3286]: W1009 01:06:17.552492 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.553008 kubelet[3286]: E1009 01:06:17.552959 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.553858 kubelet[3286]: E1009 01:06:17.553570 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.553858 kubelet[3286]: W1009 01:06:17.553585 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.554074 kubelet[3286]: E1009 01:06:17.554030 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.555245 kubelet[3286]: E1009 01:06:17.555223 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.555508 kubelet[3286]: W1009 01:06:17.555242 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.555896 kubelet[3286]: E1009 01:06:17.555873 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.560693 kubelet[3286]: E1009 01:06:17.560355 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.560693 kubelet[3286]: W1009 01:06:17.560377 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.562121 kubelet[3286]: E1009 01:06:17.561152 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.563266 kubelet[3286]: E1009 01:06:17.562234 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.563266 kubelet[3286]: W1009 01:06:17.562250 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.563266 kubelet[3286]: E1009 01:06:17.562867 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.564012 kubelet[3286]: E1009 01:06:17.563997 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.565111 kubelet[3286]: W1009 01:06:17.565070 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.570081 kubelet[3286]: E1009 01:06:17.565292 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.570081 kubelet[3286]: E1009 01:06:17.568160 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.570081 kubelet[3286]: W1009 01:06:17.568177 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.570081 kubelet[3286]: E1009 01:06:17.568218 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.576848 kubelet[3286]: E1009 01:06:17.575734 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.576848 kubelet[3286]: W1009 01:06:17.575764 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.576848 kubelet[3286]: E1009 01:06:17.575859 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.582734 kubelet[3286]: E1009 01:06:17.579015 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.582734 kubelet[3286]: W1009 01:06:17.579035 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.582734 kubelet[3286]: E1009 01:06:17.579076 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.582734 kubelet[3286]: E1009 01:06:17.579905 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.582734 kubelet[3286]: W1009 01:06:17.579921 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.582734 kubelet[3286]: E1009 01:06:17.580017 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.585037 kubelet[3286]: E1009 01:06:17.584491 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.585037 kubelet[3286]: W1009 01:06:17.584512 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.585037 kubelet[3286]: E1009 01:06:17.584554 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.585037 kubelet[3286]: E1009 01:06:17.584903 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.585037 kubelet[3286]: W1009 01:06:17.584916 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.585037 kubelet[3286]: E1009 01:06:17.584950 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.586719 kubelet[3286]: E1009 01:06:17.586520 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.586719 kubelet[3286]: W1009 01:06:17.586534 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.586807 kubelet[3286]: E1009 01:06:17.586747 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:17.589590 kubelet[3286]: E1009 01:06:17.587325 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.589590 kubelet[3286]: W1009 01:06:17.587339 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.589590 kubelet[3286]: E1009 01:06:17.588909 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.591375 kubelet[3286]: E1009 01:06:17.590998 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.591375 kubelet[3286]: W1009 01:06:17.591027 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.591375 kubelet[3286]: E1009 01:06:17.591055 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.592342 kubelet[3286]: E1009 01:06:17.592125 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.592562 kubelet[3286]: W1009 01:06:17.592521 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.593012 kubelet[3286]: E1009 01:06:17.592876 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.623031 kubelet[3286]: E1009 01:06:17.622921 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:17.623031 kubelet[3286]: W1009 01:06:17.622949 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:17.623031 kubelet[3286]: E1009 01:06:17.622981 3286 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:17.642595 containerd[1978]: time="2024-10-09T01:06:17.642221901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:17.642595 containerd[1978]: time="2024-10-09T01:06:17.642317369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:17.642595 containerd[1978]: time="2024-10-09T01:06:17.642346304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:17.642595 containerd[1978]: time="2024-10-09T01:06:17.642491826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:17.680051 systemd[1]: Started cri-containerd-764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26.scope - libcontainer container 764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26. Oct 9 01:06:17.753309 containerd[1978]: time="2024-10-09T01:06:17.753055057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rz8db,Uid:98d83c46-8090-457a-abdd-0165130c522f,Namespace:calico-system,Attempt:0,} returns sandbox id \"764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26\"" Oct 9 01:06:17.756730 containerd[1978]: time="2024-10-09T01:06:17.756668914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb687b967-9php4,Uid:c12997a7-0112-4663-8c25-f94eb828a2f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"178b525a523c7dcb45a2a898dda423c3a9e596937e148cad5cab490bd40f888b\"" Oct 9 01:06:17.758843 containerd[1978]: time="2024-10-09T01:06:17.758343878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:06:18.938498 kubelet[3286]: E1009 01:06:18.938207 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:19.326595 containerd[1978]: time="2024-10-09T01:06:19.325515526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:19.329299 containerd[1978]: time="2024-10-09T01:06:19.329233562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 01:06:19.331644 containerd[1978]: time="2024-10-09T01:06:19.331604453Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:19.337852 containerd[1978]: time="2024-10-09T01:06:19.337612972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:19.339782 containerd[1978]: time="2024-10-09T01:06:19.339643692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.58114076s" Oct 9 01:06:19.339782 containerd[1978]: time="2024-10-09T01:06:19.339693014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 01:06:19.344119 containerd[1978]: time="2024-10-09T01:06:19.343802831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:06:19.346327 containerd[1978]: time="2024-10-09T01:06:19.346147455Z" level=info msg="CreateContainer within sandbox \"764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:06:19.386166 containerd[1978]: time="2024-10-09T01:06:19.386022306Z" level=info msg="CreateContainer within sandbox \"764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034\"" Oct 9 01:06:19.388922 containerd[1978]: time="2024-10-09T01:06:19.388660276Z" level=info msg="StartContainer for \"29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034\"" Oct 9 01:06:19.479480 systemd[1]: run-containerd-runc-k8s.io-29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034-runc.Xqkd4R.mount: Deactivated successfully. Oct 9 01:06:19.490235 systemd[1]: Started cri-containerd-29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034.scope - libcontainer container 29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034. Oct 9 01:06:19.548498 containerd[1978]: time="2024-10-09T01:06:19.548425153Z" level=info msg="StartContainer for \"29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034\" returns successfully" Oct 9 01:06:19.598471 systemd[1]: cri-containerd-29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034.scope: Deactivated successfully. Oct 9 01:06:19.801947 containerd[1978]: time="2024-10-09T01:06:19.722208879Z" level=info msg="shim disconnected" id=29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034 namespace=k8s.io Oct 9 01:06:19.801947 containerd[1978]: time="2024-10-09T01:06:19.801656773Z" level=warning msg="cleaning up after shim disconnected" id=29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034 namespace=k8s.io Oct 9 01:06:19.801947 containerd[1978]: time="2024-10-09T01:06:19.801681866Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:06:20.373919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29598793d668ff92460a8ba5c35ae65ee73b5a8b8be5dfd736b3f1f0ee810034-rootfs.mount: Deactivated successfully. 
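[Editor's note on the repeated FlexVolume failures above: the kubelet execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` command and parses its stdout as JSON, so a missing executable produces empty output and the "unexpected end of JSON input" errors; the flexvol-driver init container that just ran (from the pod2daemon-flexvol image) is what installs that binary on the host. Below is a minimal sketch of a FlexVolume `init` handler, assuming a hypothetical standalone driver rather than Calico's actual uds implementation.]

```go
// Hypothetical minimal FlexVolume driver (illustration only, not Calico's
// actual uds driver). The kubelet invokes the binary as `<driver> init`
// and expects a JSON status object on stdout; empty stdout is what the
// kubelet reports above as "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Declare success and opt out of attach/detach support.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```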
Oct 9 01:06:20.938241 kubelet[3286]: E1009 01:06:20.938203 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:22.755121 containerd[1978]: time="2024-10-09T01:06:22.754095057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:22.755639 containerd[1978]: time="2024-10-09T01:06:22.755595759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 01:06:22.757668 containerd[1978]: time="2024-10-09T01:06:22.757632436Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:22.760908 containerd[1978]: time="2024-10-09T01:06:22.760875256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:22.762182 containerd[1978]: time="2024-10-09T01:06:22.762146171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.418257152s" Oct 9 01:06:22.762330 containerd[1978]: time="2024-10-09T01:06:22.762310433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 01:06:22.777149 containerd[1978]: time="2024-10-09T01:06:22.777102185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:06:22.802320 containerd[1978]: time="2024-10-09T01:06:22.801772262Z" level=info msg="CreateContainer within sandbox \"178b525a523c7dcb45a2a898dda423c3a9e596937e148cad5cab490bd40f888b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:06:22.863336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078817969.mount: Deactivated successfully. Oct 9 01:06:22.864853 containerd[1978]: time="2024-10-09T01:06:22.864778800Z" level=info msg="CreateContainer within sandbox \"178b525a523c7dcb45a2a898dda423c3a9e596937e148cad5cab490bd40f888b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7aa3774c0050d747c4a34fd7fd2b73152351070ebd33242d3b25c7c31e6c2c33\"" Oct 9 01:06:22.866014 containerd[1978]: time="2024-10-09T01:06:22.865957575Z" level=info msg="StartContainer for \"7aa3774c0050d747c4a34fd7fd2b73152351070ebd33242d3b25c7c31e6c2c33\"" Oct 9 01:06:22.936122 systemd[1]: Started cri-containerd-7aa3774c0050d747c4a34fd7fd2b73152351070ebd33242d3b25c7c31e6c2c33.scope - libcontainer container 7aa3774c0050d747c4a34fd7fd2b73152351070ebd33242d3b25c7c31e6c2c33. 
Oct 9 01:06:22.938291 kubelet[3286]: E1009 01:06:22.937918 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:23.030134 containerd[1978]: time="2024-10-09T01:06:23.028924461Z" level=info msg="StartContainer for \"7aa3774c0050d747c4a34fd7fd2b73152351070ebd33242d3b25c7c31e6c2c33\" returns successfully" Oct 9 01:06:23.300121 kubelet[3286]: I1009 01:06:23.299885 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fb687b967-9php4" podStartSLOduration=2.2820102970000002 podStartE2EDuration="7.299861698s" podCreationTimestamp="2024-10-09 01:06:16 +0000 UTC" firstStartedPulling="2024-10-09 01:06:17.75877348 +0000 UTC m=+22.341900423" lastFinishedPulling="2024-10-09 01:06:22.776624879 +0000 UTC m=+27.359751824" observedRunningTime="2024-10-09 01:06:23.298974747 +0000 UTC m=+27.882101711" watchObservedRunningTime="2024-10-09 01:06:23.299861698 +0000 UTC m=+27.882988665" Oct 9 01:06:24.291516 kubelet[3286]: I1009 01:06:24.291482 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:06:24.938486 kubelet[3286]: E1009 01:06:24.938440 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:26.938830 kubelet[3286]: E1009 01:06:26.938741 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:28.003618 containerd[1978]: time="2024-10-09T01:06:28.003569267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:28.005245 containerd[1978]: time="2024-10-09T01:06:28.005184173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 01:06:28.006889 containerd[1978]: time="2024-10-09T01:06:28.006806023Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:28.010097 containerd[1978]: time="2024-10-09T01:06:28.010036449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:28.011163 containerd[1978]: time="2024-10-09T01:06:28.010894574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.233739039s" Oct 9 01:06:28.011163 containerd[1978]: 
time="2024-10-09T01:06:28.010947436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 01:06:28.014305 containerd[1978]: time="2024-10-09T01:06:28.014278065Z" level=info msg="CreateContainer within sandbox \"764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:06:28.045976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255422636.mount: Deactivated successfully. Oct 9 01:06:28.047169 containerd[1978]: time="2024-10-09T01:06:28.047130951Z" level=info msg="CreateContainer within sandbox \"764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3\"" Oct 9 01:06:28.048610 containerd[1978]: time="2024-10-09T01:06:28.048583066Z" level=info msg="StartContainer for \"cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3\"" Oct 9 01:06:28.155984 systemd[1]: Started cri-containerd-cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3.scope - libcontainer container cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3. Oct 9 01:06:28.198529 containerd[1978]: time="2024-10-09T01:06:28.198476314Z" level=info msg="StartContainer for \"cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3\" returns successfully" Oct 9 01:06:28.938853 kubelet[3286]: E1009 01:06:28.938769 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:29.229187 systemd[1]: cri-containerd-cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3.scope: Deactivated successfully. Oct 9 01:06:29.275386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3-rootfs.mount: Deactivated successfully. Oct 9 01:06:29.326767 kubelet[3286]: I1009 01:06:29.326741 3286 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:06:29.383934 kubelet[3286]: I1009 01:06:29.383893 3286 topology_manager.go:215] "Topology Admit Handler" podUID="3e699c3b-f10c-4efc-8adb-92aa9bdb1e47" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gsnst" Oct 9 01:06:29.392357 kubelet[3286]: I1009 01:06:29.390779 3286 topology_manager.go:215] "Topology Admit Handler" podUID="a46cf12a-2481-4bbd-9cc4-7841ca00f5d0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-drbth" Oct 9 01:06:29.397546 kubelet[3286]: I1009 01:06:29.397498 3286 topology_manager.go:215] "Topology Admit Handler" podUID="d016466e-20dd-4a19-9b78-c0ff4431d047" podNamespace="calico-system" podName="calico-kube-controllers-76788c6b87-jqmk4" Oct 9 01:06:29.405524 systemd[1]: Created slice kubepods-burstable-pod3e699c3b_f10c_4efc_8adb_92aa9bdb1e47.slice - libcontainer container kubepods-burstable-pod3e699c3b_f10c_4efc_8adb_92aa9bdb1e47.slice. Oct 9 01:06:29.419620 systemd[1]: Created slice kubepods-burstable-poda46cf12a_2481_4bbd_9cc4_7841ca00f5d0.slice - libcontainer container kubepods-burstable-poda46cf12a_2481_4bbd_9cc4_7841ca00f5d0.slice. 
Oct 9 01:06:29.435553 systemd[1]: Created slice kubepods-besteffort-podd016466e_20dd_4a19_9b78_c0ff4431d047.slice - libcontainer container kubepods-besteffort-podd016466e_20dd_4a19_9b78_c0ff4431d047.slice. Oct 9 01:06:29.479211 kubelet[3286]: I1009 01:06:29.479170 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e699c3b-f10c-4efc-8adb-92aa9bdb1e47-config-volume\") pod \"coredns-7db6d8ff4d-gsnst\" (UID: \"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47\") " pod="kube-system/coredns-7db6d8ff4d-gsnst" Oct 9 01:06:29.479211 kubelet[3286]: I1009 01:06:29.479225 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d016466e-20dd-4a19-9b78-c0ff4431d047-tigera-ca-bundle\") pod \"calico-kube-controllers-76788c6b87-jqmk4\" (UID: \"d016466e-20dd-4a19-9b78-c0ff4431d047\") " pod="calico-system/calico-kube-controllers-76788c6b87-jqmk4" Oct 9 01:06:29.479777 kubelet[3286]: I1009 01:06:29.479322 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvm2\" (UniqueName: \"kubernetes.io/projected/d016466e-20dd-4a19-9b78-c0ff4431d047-kube-api-access-5cvm2\") pod \"calico-kube-controllers-76788c6b87-jqmk4\" (UID: \"d016466e-20dd-4a19-9b78-c0ff4431d047\") " pod="calico-system/calico-kube-controllers-76788c6b87-jqmk4" Oct 9 01:06:29.479777 kubelet[3286]: I1009 01:06:29.479353 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgvmr\" (UniqueName: \"kubernetes.io/projected/3e699c3b-f10c-4efc-8adb-92aa9bdb1e47-kube-api-access-rgvmr\") pod \"coredns-7db6d8ff4d-gsnst\" (UID: \"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47\") " pod="kube-system/coredns-7db6d8ff4d-gsnst" Oct 9 01:06:29.479777 kubelet[3286]: I1009 01:06:29.479387 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46cf12a-2481-4bbd-9cc4-7841ca00f5d0-config-volume\") pod \"coredns-7db6d8ff4d-drbth\" (UID: \"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0\") " pod="kube-system/coredns-7db6d8ff4d-drbth" Oct 9 01:06:29.479777 kubelet[3286]: I1009 01:06:29.479414 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7d6p\" (UniqueName: \"kubernetes.io/projected/a46cf12a-2481-4bbd-9cc4-7841ca00f5d0-kube-api-access-q7d6p\") pod \"coredns-7db6d8ff4d-drbth\" (UID: \"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0\") " pod="kube-system/coredns-7db6d8ff4d-drbth" Oct 9 01:06:29.531252 containerd[1978]: time="2024-10-09T01:06:29.529623899Z" level=info msg="shim disconnected" id=cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3 namespace=k8s.io Oct 9 01:06:29.531252 containerd[1978]: time="2024-10-09T01:06:29.529692039Z" level=warning msg="cleaning up after shim disconnected" id=cb98c2959ba0a8a92852a6aa95ec0c244ab7c0518706fdd5e2ec2c1c8fbd2ea3 namespace=k8s.io Oct 9 01:06:29.531252 containerd[1978]: time="2024-10-09T01:06:29.529703825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:06:29.712423 containerd[1978]: time="2024-10-09T01:06:29.712377278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsnst,Uid:3e699c3b-f10c-4efc-8adb-92aa9bdb1e47,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:29.729461 containerd[1978]: 
time="2024-10-09T01:06:29.728992655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-drbth,Uid:a46cf12a-2481-4bbd-9cc4-7841ca00f5d0,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:29.758310 containerd[1978]: time="2024-10-09T01:06:29.758265096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76788c6b87-jqmk4,Uid:d016466e-20dd-4a19-9b78-c0ff4431d047,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:30.185505 containerd[1978]: time="2024-10-09T01:06:30.185301739Z" level=error msg="Failed to destroy network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.193437 containerd[1978]: time="2024-10-09T01:06:30.193366929Z" level=error msg="encountered an error cleaning up failed sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.193693 containerd[1978]: time="2024-10-09T01:06:30.193659456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsnst,Uid:3e699c3b-f10c-4efc-8adb-92aa9bdb1e47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.194275 kubelet[3286]: E1009 01:06:30.194200 3286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.194762 kubelet[3286]: E1009 01:06:30.194292 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gsnst" Oct 9 01:06:30.194762 kubelet[3286]: E1009 01:06:30.194318 3286 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gsnst" Oct 9 01:06:30.194762 kubelet[3286]: E1009 01:06:30.194379 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gsnst_kube-system(3e699c3b-f10c-4efc-8adb-92aa9bdb1e47)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"coredns-7db6d8ff4d-gsnst_kube-system(3e699c3b-f10c-4efc-8adb-92aa9bdb1e47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gsnst" podUID="3e699c3b-f10c-4efc-8adb-92aa9bdb1e47" Oct 9 01:06:30.205451 containerd[1978]: time="2024-10-09T01:06:30.203954543Z" level=error msg="Failed to destroy network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.205451 containerd[1978]: time="2024-10-09T01:06:30.204491016Z" level=error msg="encountered an error cleaning up failed sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.205451 containerd[1978]: time="2024-10-09T01:06:30.204559736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-drbth,Uid:a46cf12a-2481-4bbd-9cc4-7841ca00f5d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.210069 kubelet[3286]: E1009 01:06:30.209564 3286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.210551 kubelet[3286]: E1009 01:06:30.210506 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-drbth" Oct 9 01:06:30.210762 kubelet[3286]: E1009 01:06:30.210684 3286 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-drbth" Oct 9 01:06:30.211095 kubelet[3286]: E1009 01:06:30.211057 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-drbth_kube-system(a46cf12a-2481-4bbd-9cc4-7841ca00f5d0)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-drbth_kube-system(a46cf12a-2481-4bbd-9cc4-7841ca00f5d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-drbth" podUID="a46cf12a-2481-4bbd-9cc4-7841ca00f5d0" Oct 9 01:06:30.212963 containerd[1978]: time="2024-10-09T01:06:30.212919098Z" level=error msg="Failed to destroy network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.213290 containerd[1978]: time="2024-10-09T01:06:30.213254145Z" level=error msg="encountered an error cleaning up failed sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.213365 containerd[1978]: time="2024-10-09T01:06:30.213320759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76788c6b87-jqmk4,Uid:d016466e-20dd-4a19-9b78-c0ff4431d047,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.214098 kubelet[3286]: E1009 01:06:30.213732 3286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.214098 kubelet[3286]: E1009 01:06:30.213801 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76788c6b87-jqmk4" Oct 9 01:06:30.214098 kubelet[3286]: E1009 01:06:30.213846 3286 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76788c6b87-jqmk4" Oct 9 01:06:30.214293 kubelet[3286]: E1009 01:06:30.213917 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-76788c6b87-jqmk4_calico-system(d016466e-20dd-4a19-9b78-c0ff4431d047)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76788c6b87-jqmk4_calico-system(d016466e-20dd-4a19-9b78-c0ff4431d047)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76788c6b87-jqmk4" podUID="d016466e-20dd-4a19-9b78-c0ff4431d047" Oct 9 01:06:30.281334 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee-shm.mount: Deactivated successfully. Oct 9 01:06:30.281483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898-shm.mount: Deactivated successfully. Oct 9 01:06:30.320320 kubelet[3286]: I1009 01:06:30.320278 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:30.324555 containerd[1978]: time="2024-10-09T01:06:30.324515465Z" level=info msg="StopPodSandbox for \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\"" Oct 9 01:06:30.325742 kubelet[3286]: I1009 01:06:30.325710 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:30.326798 containerd[1978]: time="2024-10-09T01:06:30.326695880Z" level=info msg="StopPodSandbox for \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\"" Oct 9 01:06:30.350489 containerd[1978]: time="2024-10-09T01:06:30.348711517Z" level=info msg="Ensure that sandbox 9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898 in task-service has been cleanup successfully" Oct 9 01:06:30.351108 containerd[1978]: time="2024-10-09T01:06:30.350957622Z" level=info msg="Ensure that sandbox 66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee in task-service has been cleanup successfully" Oct 9 01:06:30.387657 containerd[1978]: time="2024-10-09T01:06:30.387288714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:06:30.396604 kubelet[3286]: I1009 01:06:30.396261 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:30.402027 containerd[1978]: time="2024-10-09T01:06:30.401783620Z" level=info msg="StopPodSandbox for \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\"" Oct 9 01:06:30.402343 containerd[1978]: time="2024-10-09T01:06:30.402313978Z" level=info msg="Ensure that sandbox 7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306 in task-service has been cleanup successfully" Oct 9 01:06:30.509581 containerd[1978]: time="2024-10-09T01:06:30.508846363Z" level=error msg="StopPodSandbox for \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\" failed" error="failed to destroy network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 9 01:06:30.513347 kubelet[3286]: E1009 01:06:30.512966 3286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:30.513347 kubelet[3286]: E1009 01:06:30.513129 3286 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898"} Oct 9 01:06:30.513347 kubelet[3286]: E1009 01:06:30.513207 3286 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:06:30.513347 kubelet[3286]: E1009 01:06:30.513240 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gsnst" podUID="3e699c3b-f10c-4efc-8adb-92aa9bdb1e47" Oct 9 01:06:30.516534 containerd[1978]: time="2024-10-09T01:06:30.516439936Z" level=error msg="StopPodSandbox for \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\" failed" error="failed to destroy network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.517134 kubelet[3286]: E1009 01:06:30.516726 3286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:30.517134 kubelet[3286]: E1009 01:06:30.516773 3286 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee"} Oct 9 01:06:30.517134 kubelet[3286]: E1009 01:06:30.516849 3286 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:06:30.517908 kubelet[3286]: E1009 01:06:30.517869 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-drbth" podUID="a46cf12a-2481-4bbd-9cc4-7841ca00f5d0" Oct 9 01:06:30.525780 containerd[1978]: time="2024-10-09T01:06:30.525684582Z" level=error msg="StopPodSandbox for \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\" failed" error="failed to destroy network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:30.526190 kubelet[3286]: E1009 01:06:30.526143 3286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:30.526397 kubelet[3286]: E1009 01:06:30.526198 3286 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306"} Oct 9 01:06:30.526397 kubelet[3286]: E1009 01:06:30.526244 3286 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d016466e-20dd-4a19-9b78-c0ff4431d047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:06:30.526397 kubelet[3286]: E1009 01:06:30.526345 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d016466e-20dd-4a19-9b78-c0ff4431d047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76788c6b87-jqmk4" podUID="d016466e-20dd-4a19-9b78-c0ff4431d047" Oct 9 01:06:30.945231 systemd[1]: Created slice kubepods-besteffort-podf3b7d4d8_eaee_47df_9d20_3c65da15fec6.slice - libcontainer container 
kubepods-besteffort-podf3b7d4d8_eaee_47df_9d20_3c65da15fec6.slice. Oct 9 01:06:30.950174 containerd[1978]: time="2024-10-09T01:06:30.950133369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-554cf,Uid:f3b7d4d8-eaee-47df-9d20-3c65da15fec6,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:31.055473 containerd[1978]: time="2024-10-09T01:06:31.055416016Z" level=error msg="Failed to destroy network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:31.056892 containerd[1978]: time="2024-10-09T01:06:31.056182684Z" level=error msg="encountered an error cleaning up failed sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:31.056892 containerd[1978]: time="2024-10-09T01:06:31.056260303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-554cf,Uid:f3b7d4d8-eaee-47df-9d20-3c65da15fec6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:31.057055 kubelet[3286]: E1009 01:06:31.056615 3286 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:31.057055 kubelet[3286]: E1009 01:06:31.056677 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-554cf" Oct 9 01:06:31.057055 kubelet[3286]: E1009 01:06:31.056705 3286 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-554cf" Oct 9 01:06:31.057294 kubelet[3286]: E1009 01:06:31.056756 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-554cf_calico-system(f3b7d4d8-eaee-47df-9d20-3c65da15fec6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-554cf_calico-system(f3b7d4d8-eaee-47df-9d20-3c65da15fec6)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:31.061644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5-shm.mount: Deactivated successfully. Oct 9 01:06:31.399629 kubelet[3286]: I1009 01:06:31.399592 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:31.404548 containerd[1978]: time="2024-10-09T01:06:31.403259190Z" level=info msg="StopPodSandbox for \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\"" Oct 9 01:06:31.404548 containerd[1978]: time="2024-10-09T01:06:31.403561462Z" level=info msg="Ensure that sandbox c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5 in task-service has been cleanup successfully" Oct 9 01:06:31.457961 containerd[1978]: time="2024-10-09T01:06:31.457903747Z" level=error msg="StopPodSandbox for \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\" failed" error="failed to destroy network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:31.458624 kubelet[3286]: E1009 01:06:31.458574 3286 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:31.458733 kubelet[3286]: E1009 01:06:31.458631 3286 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5"} Oct 9 01:06:31.458733 kubelet[3286]: E1009 01:06:31.458674 3286 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3b7d4d8-eaee-47df-9d20-3c65da15fec6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:06:31.458733 kubelet[3286]: E1009 01:06:31.458706 3286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3b7d4d8-eaee-47df-9d20-3c65da15fec6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-554cf" podUID="f3b7d4d8-eaee-47df-9d20-3c65da15fec6" Oct 9 01:06:38.139607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176101843.mount: Deactivated successfully. Oct 9 01:06:38.239271 containerd[1978]: time="2024-10-09T01:06:38.232166306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 01:06:38.262805 containerd[1978]: time="2024-10-09T01:06:38.262748305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.865965023s" Oct 9 01:06:38.263181 containerd[1978]: time="2024-10-09T01:06:38.263008774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 01:06:38.273925 containerd[1978]: time="2024-10-09T01:06:38.273744857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:38.301344 containerd[1978]: time="2024-10-09T01:06:38.301306033Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:38.306664 containerd[1978]: time="2024-10-09T01:06:38.306376709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:38.346703 containerd[1978]: time="2024-10-09T01:06:38.346618680Z" level=info msg="CreateContainer within sandbox \"764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:06:38.461449 containerd[1978]: time="2024-10-09T01:06:38.460933736Z" level=info msg="CreateContainer within sandbox \"764cf0707342b0db6f78ef74cd7a9d96d38e828c824f0469051d9623cf1c7b26\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ebb2747a569420c60e9ed5b44b8e52ef5fd5b0a78d3b68533eba2920f9b0cabe\"" Oct 9 01:06:38.464159 containerd[1978]: time="2024-10-09T01:06:38.464020538Z" level=info msg="StartContainer for \"ebb2747a569420c60e9ed5b44b8e52ef5fd5b0a78d3b68533eba2920f9b0cabe\"" Oct 9 01:06:38.472194 kubelet[3286]: I1009 01:06:38.472159 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:06:38.683420 systemd[1]: Started cri-containerd-ebb2747a569420c60e9ed5b44b8e52ef5fd5b0a78d3b68533eba2920f9b0cabe.scope - libcontainer container ebb2747a569420c60e9ed5b44b8e52ef5fd5b0a78d3b68533eba2920f9b0cabe. Oct 9 01:06:38.855558 containerd[1978]: time="2024-10-09T01:06:38.855156141Z" level=info msg="StartContainer for \"ebb2747a569420c60e9ed5b44b8e52ef5fd5b0a78d3b68533eba2920f9b0cabe\" returns successfully" Oct 9 01:06:39.200308 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:06:39.202804 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 01:06:40.943058 containerd[1978]: time="2024-10-09T01:06:40.940224719Z" level=info msg="StopPodSandbox for \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\"" Oct 9 01:06:41.145329 kubelet[3286]: I1009 01:06:41.143124 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rz8db" podStartSLOduration=3.572480326 podStartE2EDuration="24.116852957s" podCreationTimestamp="2024-10-09 01:06:17 +0000 UTC" firstStartedPulling="2024-10-09 01:06:17.757619979 +0000 UTC m=+22.340746922" lastFinishedPulling="2024-10-09 01:06:38.301992598 +0000 UTC m=+42.885119553" observedRunningTime="2024-10-09 01:06:39.583452211 +0000 UTC m=+44.166579175" watchObservedRunningTime="2024-10-09 01:06:41.116852957 +0000 UTC m=+45.699979920" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.114 [INFO][4592] k8s.go 608: Cleaning up netns ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.116 [INFO][4592] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" iface="eth0" netns="/var/run/netns/cni-d963e400-b904-1f08-da7c-d9de80865a12" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.117 [INFO][4592] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" iface="eth0" netns="/var/run/netns/cni-d963e400-b904-1f08-da7c-d9de80865a12" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.119 [INFO][4592] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" iface="eth0" netns="/var/run/netns/cni-d963e400-b904-1f08-da7c-d9de80865a12" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.119 [INFO][4592] k8s.go 615: Releasing IP address(es) ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.119 [INFO][4592] utils.go 188: Calico CNI releasing IP address ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.365 [INFO][4629] ipam_plugin.go 417: Releasing address using handleID ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.366 [INFO][4629] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.367 [INFO][4629] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.385 [WARNING][4629] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.385 [INFO][4629] ipam_plugin.go 445: Releasing address using workloadID ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.390 [INFO][4629] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:41.397765 containerd[1978]: 2024-10-09 01:06:41.394 [INFO][4592] k8s.go 621: Teardown processing complete. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:41.399763 containerd[1978]: time="2024-10-09T01:06:41.398377817Z" level=info msg="TearDown network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\" successfully" Oct 9 01:06:41.399763 containerd[1978]: time="2024-10-09T01:06:41.398876224Z" level=info msg="StopPodSandbox for \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\" returns successfully" Oct 9 01:06:41.402298 systemd[1]: run-netns-cni\x2dd963e400\x2db904\x2d1f08\x2dda7c\x2dd9de80865a12.mount: Deactivated successfully. Oct 9 01:06:41.406341 containerd[1978]: time="2024-10-09T01:06:41.403547049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-drbth,Uid:a46cf12a-2481-4bbd-9cc4-7841ca00f5d0,Namespace:kube-system,Attempt:1,}" Oct 9 01:06:41.686231 (udev-worker)[4692]: Network interface NamePolicy= disabled on kernel command line. 
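The "Observed pod startup duration" entry for calico-node-rz8db above encodes a small calculation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A quick Go re-derivation from the logged timestamps (my reading of the fields, not kubelet source):

```go
// Re-deriving the durations in the pod_startup_latency_tracker entry above
// from its own wall-clock timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2024-10-09 01:06:17 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2024-10-09 01:06:17.757619979 +0000 UTC") // firstStartedPulling
	lastPull := parse("2024-10-09 01:06:38.301992598 +0000 UTC")  // lastFinishedPulling
	watched := parse("2024-10-09 01:06:41.116852957 +0000 UTC")   // watchObservedRunningTime

	e2e := watched.Sub(created)          // 24.116852957s, matching podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~3.57248s; the logged 3.572480326 apparently uses the
	                                     // monotonic (m=+...) readings for the pull window,
	                                     // hence a few nanoseconds' difference
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```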
Oct 9 01:06:41.716070 systemd-networkd[1817]: cali96d382e19ee: Link UP Oct 9 01:06:41.716298 systemd-networkd[1817]: cali96d382e19ee: Gained carrier Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.487 [INFO][4665] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.520 [INFO][4665] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0 coredns-7db6d8ff4d- kube-system a46cf12a-2481-4bbd-9cc4-7841ca00f5d0 740 0 2024-10-09 01:06:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-164 coredns-7db6d8ff4d-drbth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96d382e19ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.520 [INFO][4665] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.583 [INFO][4677] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" HandleID="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.596 [INFO][4677] ipam_plugin.go 270: Auto assigning IP ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" HandleID="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318980), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-164", "pod":"coredns-7db6d8ff4d-drbth", "timestamp":"2024-10-09 01:06:41.583524282 +0000 UTC"}, Hostname:"ip-172-31-16-164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.596 [INFO][4677] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.596 [INFO][4677] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.597 [INFO][4677] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-164' Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.599 [INFO][4677] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.609 [INFO][4677] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.625 [INFO][4677] ipam.go 489: Trying affinity for 192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.628 [INFO][4677] ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.637 [INFO][4677] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.637 [INFO][4677] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.640 [INFO][4677] ipam.go 1685: Creating new handle: k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952 Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.649 [INFO][4677] ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.662 [INFO][4677] ipam.go 1216: Successfully claimed IPs: [192.168.56.65/26] block=192.168.56.64/26 handle="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.662 [INFO][4677] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.65/26] handle="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" host="ip-172-31-16-164" Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.662 [INFO][4677] ipam_plugin.go 379: Released host-wide IPAM lock. 
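The IPAM trace above is the whole address-assignment path in miniature: take the host-wide lock, find this node's affine block 192.168.56.64/26, and claim the first free address, 192.168.56.65; the pods set up later in this log receive .66, .67 and .68 from the same block. A small Go check of the block arithmetic (illustration only, not Calico's IPAM code):

```go
// Sanity-check the /26 block arithmetic behind the IPAM assignments above
// (illustration only, not Calico's IPAM implementation).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block loaded for host ip-172-31-16-164 in the log.
	block := netip.MustParsePrefix("192.168.56.64/26")
	claimed := []string{"192.168.56.65", "192.168.56.66", "192.168.56.67", "192.168.56.68"}

	// A /26 leaves 32-26 = 6 host bits, i.e. 2^6 = 64 addresses (.64 through .127).
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	for _, s := range claimed {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s inside %s: %v\n", ip, block, block.Contains(ip))
	}
}
```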
Oct 9 01:06:41.761575 containerd[1978]: 2024-10-09 01:06:41.662 [INFO][4677] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.56.65/26] IPv6=[] ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" HandleID="k8s-pod-network.c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.764530 containerd[1978]: 2024-10-09 01:06:41.670 [INFO][4665] k8s.go 386: Populated endpoint ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"", Pod:"coredns-7db6d8ff4d-drbth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96d382e19ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:41.764530 containerd[1978]: 2024-10-09 01:06:41.670 [INFO][4665] k8s.go 387: Calico CNI using IPs: [192.168.56.65/32] ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.764530 containerd[1978]: 2024-10-09 01:06:41.670 [INFO][4665] dataplane_linux.go 68: Setting the host side veth name to cali96d382e19ee ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.764530 containerd[1978]: 2024-10-09 01:06:41.705 [INFO][4665] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.764530 containerd[1978]: 2024-10-09 
01:06:41.709 [INFO][4665] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952", Pod:"coredns-7db6d8ff4d-drbth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96d382e19ee", MAC:"2e:7b:e1:ab:b5:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:41.764530 containerd[1978]: 2024-10-09 01:06:41.755 [INFO][4665] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-drbth" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:41.793878 kernel: bpftool[4723]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:06:41.839021 containerd[1978]: time="2024-10-09T01:06:41.838521411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:41.839021 containerd[1978]: time="2024-10-09T01:06:41.838617039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:41.839021 containerd[1978]: time="2024-10-09T01:06:41.838635670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.839021 containerd[1978]: time="2024-10-09T01:06:41.838759061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:41.890093 systemd[1]: Started cri-containerd-c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952.scope - libcontainer container c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952. Oct 9 01:06:41.945948 containerd[1978]: time="2024-10-09T01:06:41.945559950Z" level=info msg="StopPodSandbox for \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\"" Oct 9 01:06:42.014032 containerd[1978]: time="2024-10-09T01:06:42.013596068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-drbth,Uid:a46cf12a-2481-4bbd-9cc4-7841ca00f5d0,Namespace:kube-system,Attempt:1,} returns sandbox id \"c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952\"" Oct 9 01:06:42.050919 containerd[1978]: time="2024-10-09T01:06:42.050656550Z" level=info msg="CreateContainer within sandbox \"c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:06:42.121239 containerd[1978]: time="2024-10-09T01:06:42.120447327Z" level=info msg="CreateContainer within sandbox \"c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c20e73169157571ff19ef9d6c212144c60d6dd1e7eda85cedf6d5d4348d5a40b\"" Oct 9 01:06:42.121755 containerd[1978]: time="2024-10-09T01:06:42.121507783Z" level=info msg="StartContainer for \"c20e73169157571ff19ef9d6c212144c60d6dd1e7eda85cedf6d5d4348d5a40b\"" Oct 9 01:06:42.244442 systemd[1]: Started cri-containerd-c20e73169157571ff19ef9d6c212144c60d6dd1e7eda85cedf6d5d4348d5a40b.scope - libcontainer container c20e73169157571ff19ef9d6c212144c60d6dd1e7eda85cedf6d5d4348d5a40b. Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.115 [INFO][4777] k8s.go 608: Cleaning up netns ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.115 [INFO][4777] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" iface="eth0" netns="/var/run/netns/cni-8a147c7a-443e-f50e-7b2f-b51623489a0b" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.115 [INFO][4777] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" iface="eth0" netns="/var/run/netns/cni-8a147c7a-443e-f50e-7b2f-b51623489a0b" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.116 [INFO][4777] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" iface="eth0" netns="/var/run/netns/cni-8a147c7a-443e-f50e-7b2f-b51623489a0b" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.116 [INFO][4777] k8s.go 615: Releasing IP address(es) ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.116 [INFO][4777] utils.go 188: Calico CNI releasing IP address ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.213 [INFO][4790] ipam_plugin.go 417: Releasing address using handleID ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.214 [INFO][4790] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.215 [INFO][4790] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.234 [WARNING][4790] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.234 [INFO][4790] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.238 [INFO][4790] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:42.252234 containerd[1978]: 2024-10-09 01:06:42.243 [INFO][4777] k8s.go 621: Teardown processing complete. ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:42.254176 containerd[1978]: time="2024-10-09T01:06:42.253751625Z" level=info msg="TearDown network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\" successfully" Oct 9 01:06:42.254176 containerd[1978]: time="2024-10-09T01:06:42.253913612Z" level=info msg="StopPodSandbox for \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\" returns successfully" Oct 9 01:06:42.255633 containerd[1978]: time="2024-10-09T01:06:42.255465422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76788c6b87-jqmk4,Uid:d016466e-20dd-4a19-9b78-c0ff4431d047,Namespace:calico-system,Attempt:1,}" Oct 9 01:06:42.316816 containerd[1978]: time="2024-10-09T01:06:42.316632232Z" level=info msg="StartContainer for \"c20e73169157571ff19ef9d6c212144c60d6dd1e7eda85cedf6d5d4348d5a40b\" returns successfully" Oct 9 01:06:42.413578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953224836.mount: Deactivated successfully. Oct 9 01:06:42.413711 systemd[1]: run-netns-cni\x2d8a147c7a\x2d443e\x2df50e\x2d7b2f\x2db51623489a0b.mount: Deactivated successfully. 
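The mount units cleaned up here (run-netns-cni\x2d8a147c7a… and the containerd tmpmount) are systemd's escaped spellings of the underlying paths: '/' becomes '-', while a literal '-' and most other punctuation are hex-escaped as \xNN. A simplified Go sketch of that rule, roughly what `systemd-escape --path` does (the real implementation also handles leading dots, empty paths and unescaping):

```go
// Simplified sketch of systemd's path-to-unit-name escaping, enough to read
// the \x2d mount-unit names in the log (not the real implementation).
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/") // leading/trailing slashes are dropped
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			b.WriteByte(c) // characters systemd treats as safe pass through
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else, including '-', is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// The CNI netns path /var/run/netns/... resolves to /run/netns/... via the /var/run symlink.
	fmt.Println(escapePath("/run/netns/cni-8a147c7a-443e-f50e-7b2f-b51623489a0b") + ".mount")
	// prints: run-netns-cni\x2d8a147c7a\x2d443e\x2df50e\x2d7b2f\x2db51623489a0b.mount
}
```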
Oct 9 01:06:42.510806 kubelet[3286]: I1009 01:06:42.510658 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-drbth" podStartSLOduration=33.510620489 podStartE2EDuration="33.510620489s" podCreationTimestamp="2024-10-09 01:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:42.509673582 +0000 UTC m=+47.092800545" watchObservedRunningTime="2024-10-09 01:06:42.510620489 +0000 UTC m=+47.093747452" Oct 9 01:06:42.526707 systemd-networkd[1817]: cali732ba5179c2: Link UP Oct 9 01:06:42.528009 systemd-networkd[1817]: cali732ba5179c2: Gained carrier Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.350 [INFO][4822] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0 calico-kube-controllers-76788c6b87- calico-system d016466e-20dd-4a19-9b78-c0ff4431d047 747 0 2024-10-09 01:06:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76788c6b87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-164 calico-kube-controllers-76788c6b87-jqmk4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali732ba5179c2 [] []}} ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.351 [INFO][4822] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.425 [INFO][4838] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" HandleID="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.437 [INFO][4838] ipam_plugin.go 270: Auto assigning IP ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" HandleID="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103e10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-164", "pod":"calico-kube-controllers-76788c6b87-jqmk4", "timestamp":"2024-10-09 01:06:42.425113837 +0000 UTC"}, Hostname:"ip-172-31-16-164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.438 [INFO][4838] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.438 [INFO][4838] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.438 [INFO][4838] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-164' Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.441 [INFO][4838] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.452 [INFO][4838] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.460 [INFO][4838] ipam.go 489: Trying affinity for 192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.469 [INFO][4838] ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.481 [INFO][4838] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.481 [INFO][4838] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.486 [INFO][4838] ipam.go 1685: Creating new handle: k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999 Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.496 [INFO][4838] ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.512 [INFO][4838] ipam.go 1216: Successfully claimed IPs: [192.168.56.66/26] block=192.168.56.64/26 handle="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.513 [INFO][4838] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.66/26] handle="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" host="ip-172-31-16-164" Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.513 [INFO][4838] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:06:42.585151 containerd[1978]: 2024-10-09 01:06:42.513 [INFO][4838] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.56.66/26] IPv6=[] ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" HandleID="k8s-pod-network.34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.587551 containerd[1978]: 2024-10-09 01:06:42.517 [INFO][4822] k8s.go 386: Populated endpoint ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0", GenerateName:"calico-kube-controllers-76788c6b87-", Namespace:"calico-system", SelfLink:"", UID:"d016466e-20dd-4a19-9b78-c0ff4431d047", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76788c6b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"", Pod:"calico-kube-controllers-76788c6b87-jqmk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali732ba5179c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:42.587551 containerd[1978]: 2024-10-09 01:06:42.517 [INFO][4822] k8s.go 387: Calico CNI using IPs: [192.168.56.66/32] ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.587551 containerd[1978]: 2024-10-09 01:06:42.518 [INFO][4822] dataplane_linux.go 68: Setting the host side veth name to cali732ba5179c2 ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.587551 containerd[1978]: 2024-10-09 01:06:42.527 [INFO][4822] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.587551 containerd[1978]: 2024-10-09 01:06:42.530 [INFO][4822] k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0", GenerateName:"calico-kube-controllers-76788c6b87-", Namespace:"calico-system", SelfLink:"", UID:"d016466e-20dd-4a19-9b78-c0ff4431d047", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76788c6b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999", Pod:"calico-kube-controllers-76788c6b87-jqmk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali732ba5179c2", MAC:"aa:3f:71:2b:06:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:42.587551 containerd[1978]: 2024-10-09 01:06:42.580 [INFO][4822] k8s.go 500: Wrote updated endpoint to datastore ContainerID="34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999" Namespace="calico-system" Pod="calico-kube-controllers-76788c6b87-jqmk4" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:42.693043 containerd[1978]: time="2024-10-09T01:06:42.692005053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:42.694070 containerd[1978]: time="2024-10-09T01:06:42.693629298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:42.694420 containerd[1978]: time="2024-10-09T01:06:42.693890972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:42.695203 containerd[1978]: time="2024-10-09T01:06:42.694956472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:42.770233 systemd[1]: Started cri-containerd-34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999.scope - libcontainer container 34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999. 
Oct 9 01:06:42.963617 systemd-networkd[1817]: vxlan.calico: Link UP Oct 9 01:06:42.963628 systemd-networkd[1817]: vxlan.calico: Gained carrier Oct 9 01:06:43.008135 containerd[1978]: time="2024-10-09T01:06:43.007197300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76788c6b87-jqmk4,Uid:d016466e-20dd-4a19-9b78-c0ff4431d047,Namespace:calico-system,Attempt:1,} returns sandbox id \"34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999\"" Oct 9 01:06:43.010368 containerd[1978]: time="2024-10-09T01:06:43.010295516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:06:43.036466 (udev-worker)[4524]: Network interface NamePolicy= disabled on kernel command line. Oct 9 01:06:43.548164 systemd-networkd[1817]: cali96d382e19ee: Gained IPv6LL Oct 9 01:06:43.740092 systemd-networkd[1817]: cali732ba5179c2: Gained IPv6LL Oct 9 01:06:44.572595 systemd-networkd[1817]: vxlan.calico: Gained IPv6LL Oct 9 01:06:44.913304 systemd[1]: Started sshd@7-172.31.16.164:22-147.75.109.163:50496.service - OpenSSH per-connection server daemon (147.75.109.163:50496). Oct 9 01:06:45.225388 sshd[4983]: Accepted publickey for core from 147.75.109.163 port 50496 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:06:45.229952 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:06:45.240294 systemd-logind[1957]: New session 8 of user core. Oct 9 01:06:45.246079 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:06:45.950426 containerd[1978]: time="2024-10-09T01:06:45.948936600Z" level=info msg="StopPodSandbox for \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\"" Oct 9 01:06:45.950426 containerd[1978]: time="2024-10-09T01:06:45.948938274Z" level=info msg="StopPodSandbox for \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\"" Oct 9 01:06:46.050478 sshd[4983]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:46.065556 systemd-logind[1957]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:06:46.066549 systemd[1]: sshd@7-172.31.16.164:22-147.75.109.163:50496.service: Deactivated successfully. Oct 9 01:06:46.078440 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:06:46.085624 systemd-logind[1957]: Removed session 8. Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.245 [INFO][5027] k8s.go 608: Cleaning up netns ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.246 [INFO][5027] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" iface="eth0" netns="/var/run/netns/cni-b10ac220-5a9d-3b25-fd1e-7a5283d4928a" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.249 [INFO][5027] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" iface="eth0" netns="/var/run/netns/cni-b10ac220-5a9d-3b25-fd1e-7a5283d4928a" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.252 [INFO][5027] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" iface="eth0" netns="/var/run/netns/cni-b10ac220-5a9d-3b25-fd1e-7a5283d4928a" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.252 [INFO][5027] k8s.go 615: Releasing IP address(es) ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.252 [INFO][5027] utils.go 188: Calico CNI releasing IP address ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.414 [INFO][5041] ipam_plugin.go 417: Releasing address using handleID ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.416 [INFO][5041] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.416 [INFO][5041] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.451 [WARNING][5041] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.451 [INFO][5041] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.464 [INFO][5041] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:46.491877 containerd[1978]: 2024-10-09 01:06:46.480 [INFO][5027] k8s.go 621: Teardown processing complete. ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:46.498694 containerd[1978]: time="2024-10-09T01:06:46.492453958Z" level=info msg="TearDown network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\" successfully" Oct 9 01:06:46.498694 containerd[1978]: time="2024-10-09T01:06:46.492487289Z" level=info msg="StopPodSandbox for \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\" returns successfully" Oct 9 01:06:46.498694 containerd[1978]: time="2024-10-09T01:06:46.497574757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-554cf,Uid:f3b7d4d8-eaee-47df-9d20-3c65da15fec6,Namespace:calico-system,Attempt:1,}" Oct 9 01:06:46.504722 systemd[1]: run-netns-cni\x2db10ac220\x2d5a9d\x2d3b25\x2dfd1e\x2d7a5283d4928a.mount: Deactivated successfully. Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.250 [INFO][5023] k8s.go 608: Cleaning up netns ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.250 [INFO][5023] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" iface="eth0" netns="/var/run/netns/cni-a99661f8-7ad0-8e6e-9502-45ec05fe57b5" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.252 [INFO][5023] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" iface="eth0" netns="/var/run/netns/cni-a99661f8-7ad0-8e6e-9502-45ec05fe57b5" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.256 [INFO][5023] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" iface="eth0" netns="/var/run/netns/cni-a99661f8-7ad0-8e6e-9502-45ec05fe57b5" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.256 [INFO][5023] k8s.go 615: Releasing IP address(es) ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.257 [INFO][5023] utils.go 188: Calico CNI releasing IP address ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.419 [INFO][5042] ipam_plugin.go 417: Releasing address using handleID ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.420 [INFO][5042] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.465 [INFO][5042] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.510 [WARNING][5042] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.510 [INFO][5042] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.518 [INFO][5042] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:46.555899 containerd[1978]: 2024-10-09 01:06:46.527 [INFO][5023] k8s.go 621: Teardown processing complete. 
ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:46.555899 containerd[1978]: time="2024-10-09T01:06:46.539119450Z" level=info msg="TearDown network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\" successfully" Oct 9 01:06:46.555899 containerd[1978]: time="2024-10-09T01:06:46.539158244Z" level=info msg="StopPodSandbox for \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\" returns successfully" Oct 9 01:06:46.555899 containerd[1978]: time="2024-10-09T01:06:46.540736429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsnst,Uid:3e699c3b-f10c-4efc-8adb-92aa9bdb1e47,Namespace:kube-system,Attempt:1,}" Oct 9 01:06:46.557635 systemd[1]: run-netns-cni\x2da99661f8\x2d7ad0\x2d8e6e\x2d9502\x2d45ec05fe57b5.mount: Deactivated successfully. Oct 9 01:06:47.102283 systemd-networkd[1817]: cali1dfdec4017e: Link UP Oct 9 01:06:47.103749 systemd-networkd[1817]: cali1dfdec4017e: Gained carrier Oct 9 01:06:47.109379 (udev-worker)[5093]: Network interface NamePolicy= disabled on kernel command line. Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.731 [INFO][5065] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0 coredns-7db6d8ff4d- kube-system 3e699c3b-f10c-4efc-8adb-92aa9bdb1e47 812 0 2024-10-09 01:06:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-164 coredns-7db6d8ff4d-gsnst eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1dfdec4017e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.732 [INFO][5065] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.894 [INFO][5078] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" HandleID="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.916 [INFO][5078] ipam_plugin.go 270: Auto assigning IP ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" HandleID="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000115d90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-164", "pod":"coredns-7db6d8ff4d-gsnst", "timestamp":"2024-10-09 01:06:46.893766505 +0000 UTC"}, Hostname:"ip-172-31-16-164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.916 [INFO][5078] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.916 [INFO][5078] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.916 [INFO][5078] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-164' Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.924 [INFO][5078] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.936 [INFO][5078] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.956 [INFO][5078] ipam.go 489: Trying affinity for 192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.966 [INFO][5078] ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.979 [INFO][5078] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:46.979 [INFO][5078] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:47.009 [INFO][5078] ipam.go 1685: Creating new handle: k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81 Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:47.047 [INFO][5078] ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:47.075 [INFO][5078] ipam.go 1216: Successfully claimed IPs: [192.168.56.67/26] block=192.168.56.64/26 handle="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:47.076 [INFO][5078] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.67/26] handle="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" host="ip-172-31-16-164" Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:47.076 [INFO][5078] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:06:47.152429 containerd[1978]: 2024-10-09 01:06:47.076 [INFO][5078] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.56.67/26] IPv6=[] ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" HandleID="k8s-pod-network.c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:47.153727 containerd[1978]: 2024-10-09 01:06:47.085 [INFO][5065] k8s.go 386: Populated endpoint ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"", Pod:"coredns-7db6d8ff4d-gsnst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1dfdec4017e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:47.153727 containerd[1978]: 2024-10-09 01:06:47.086 [INFO][5065] k8s.go 387: Calico CNI using IPs: [192.168.56.67/32] ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:47.153727 containerd[1978]: 2024-10-09 01:06:47.086 [INFO][5065] dataplane_linux.go 68: Setting the host side veth name to cali1dfdec4017e ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:47.153727 containerd[1978]: 2024-10-09 01:06:47.104 [INFO][5065] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:47.153727 containerd[1978]: 2024-10-09 
01:06:47.108 [INFO][5065] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81", Pod:"coredns-7db6d8ff4d-gsnst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1dfdec4017e", MAC:"0a:bd:e1:49:96:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:47.153727 containerd[1978]: 2024-10-09 01:06:47.144 [INFO][5065] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gsnst" WorkloadEndpoint="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:47.259123 (udev-worker)[5097]: Network interface NamePolicy= disabled on kernel command line. 
Oct 9 01:06:47.263484 systemd-networkd[1817]: cali457692cc1f1: Link UP Oct 9 01:06:47.271276 systemd-networkd[1817]: cali457692cc1f1: Gained carrier Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:46.751 [INFO][5053] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0 csi-node-driver- calico-system f3b7d4d8-eaee-47df-9d20-3c65da15fec6 811 0 2024-10-09 01:06:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-16-164 csi-node-driver-554cf eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali457692cc1f1 [] []}} ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:46.751 [INFO][5053] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:46.951 [INFO][5082] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" HandleID="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.056 [INFO][5082] ipam_plugin.go 270: Auto assigning IP ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" HandleID="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319070), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-164", "pod":"csi-node-driver-554cf", "timestamp":"2024-10-09 01:06:46.948714937 +0000 UTC"}, Hostname:"ip-172-31-16-164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.056 [INFO][5082] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.077 [INFO][5082] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.077 [INFO][5082] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-164' Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.083 [INFO][5082] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.132 [INFO][5082] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.147 [INFO][5082] ipam.go 489: Trying affinity for 192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.152 [INFO][5082] ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.167 [INFO][5082] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.168 [INFO][5082] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.179 [INFO][5082] ipam.go 1685: Creating new handle: k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380 Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.199 [INFO][5082] ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.226 [INFO][5082] ipam.go 1216: Successfully claimed IPs: [192.168.56.68/26] block=192.168.56.64/26 handle="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.226 [INFO][5082] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.68/26] handle="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" host="ip-172-31-16-164" Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.226 [INFO][5082] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:06:47.314461 containerd[1978]: 2024-10-09 01:06:47.226 [INFO][5082] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.56.68/26] IPv6=[] ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" HandleID="k8s-pod-network.8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:47.315777 containerd[1978]: 2024-10-09 01:06:47.253 [INFO][5053] k8s.go 386: Populated endpoint ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3b7d4d8-eaee-47df-9d20-3c65da15fec6", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"", Pod:"csi-node-driver-554cf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali457692cc1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:47.315777 containerd[1978]: 2024-10-09 01:06:47.253 [INFO][5053] k8s.go 387: Calico CNI using IPs: [192.168.56.68/32] ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:47.315777 containerd[1978]: 2024-10-09 01:06:47.253 [INFO][5053] dataplane_linux.go 68: Setting the host side veth name to cali457692cc1f1 ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:47.315777 containerd[1978]: 2024-10-09 01:06:47.265 [INFO][5053] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:47.315777 containerd[1978]: 2024-10-09 01:06:47.268 [INFO][5053] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3b7d4d8-eaee-47df-9d20-3c65da15fec6", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380", Pod:"csi-node-driver-554cf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali457692cc1f1", MAC:"0a:46:b8:44:f1:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:47.315777 containerd[1978]: 2024-10-09 01:06:47.302 [INFO][5053] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380" Namespace="calico-system" Pod="csi-node-driver-554cf" WorkloadEndpoint="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:47.321521 kubelet[3286]: I1009 01:06:47.321170 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:06:47.402975 containerd[1978]: time="2024-10-09T01:06:47.397573442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:47.402975 containerd[1978]: time="2024-10-09T01:06:47.400085385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:47.404430 containerd[1978]: time="2024-10-09T01:06:47.403469632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:47.406976 containerd[1978]: time="2024-10-09T01:06:47.403970248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:47.514407 containerd[1978]: time="2024-10-09T01:06:47.514351979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:47.518867 containerd[1978]: time="2024-10-09T01:06:47.518687990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 01:06:47.523397 containerd[1978]: time="2024-10-09T01:06:47.521469526Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:47.528483 containerd[1978]: time="2024-10-09T01:06:47.528438400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:47.532339 containerd[1978]: time="2024-10-09T01:06:47.532276305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 4.521865815s" Oct 9 01:06:47.532968 containerd[1978]: time="2024-10-09T01:06:47.532937553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 01:06:47.559059 systemd[1]: Started cri-containerd-c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81.scope - libcontainer container c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81. Oct 9 01:06:47.610325 containerd[1978]: time="2024-10-09T01:06:47.608201481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:47.610325 containerd[1978]: time="2024-10-09T01:06:47.608267424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:47.610325 containerd[1978]: time="2024-10-09T01:06:47.608284445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:47.610325 containerd[1978]: time="2024-10-09T01:06:47.608386692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:47.626977 containerd[1978]: time="2024-10-09T01:06:47.623581143Z" level=info msg="CreateContainer within sandbox \"34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:06:47.672657 systemd[1]: Started cri-containerd-8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380.scope - libcontainer container 8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380. 
Oct 9 01:06:47.707132 containerd[1978]: time="2024-10-09T01:06:47.706622224Z" level=info msg="CreateContainer within sandbox \"34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7e280a1c19cf4c1621b0b315c40c9d1bf799ca4024cb8216b994d8c904e91fcc\"" Oct 9 01:06:47.708571 containerd[1978]: time="2024-10-09T01:06:47.708532133Z" level=info msg="StartContainer for \"7e280a1c19cf4c1621b0b315c40c9d1bf799ca4024cb8216b994d8c904e91fcc\"" Oct 9 01:06:47.778111 systemd[1]: Started cri-containerd-7e280a1c19cf4c1621b0b315c40c9d1bf799ca4024cb8216b994d8c904e91fcc.scope - libcontainer container 7e280a1c19cf4c1621b0b315c40c9d1bf799ca4024cb8216b994d8c904e91fcc. Oct 9 01:06:47.845939 containerd[1978]: time="2024-10-09T01:06:47.845895213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-554cf,Uid:f3b7d4d8-eaee-47df-9d20-3c65da15fec6,Namespace:calico-system,Attempt:1,} returns sandbox id \"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380\"" Oct 9 01:06:47.849252 containerd[1978]: time="2024-10-09T01:06:47.849210606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsnst,Uid:3e699c3b-f10c-4efc-8adb-92aa9bdb1e47,Namespace:kube-system,Attempt:1,} returns sandbox id \"c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81\"" Oct 9 01:06:47.853816 containerd[1978]: time="2024-10-09T01:06:47.853018745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:06:47.856097 containerd[1978]: time="2024-10-09T01:06:47.855440596Z" level=info msg="CreateContainer within sandbox \"c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:06:47.914013 containerd[1978]: time="2024-10-09T01:06:47.913964133Z" level=info msg="CreateContainer within sandbox \"c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbc3ac8ae7003f76c9d4b9cf7c81280fa89e306b3a52a00d191deb6f1f2ae1ca\"" Oct 9 01:06:47.915581 containerd[1978]: time="2024-10-09T01:06:47.915541670Z" level=info msg="StartContainer for \"bbc3ac8ae7003f76c9d4b9cf7c81280fa89e306b3a52a00d191deb6f1f2ae1ca\"" Oct 9 01:06:48.064580 systemd[1]: Started cri-containerd-bbc3ac8ae7003f76c9d4b9cf7c81280fa89e306b3a52a00d191deb6f1f2ae1ca.scope - libcontainer container bbc3ac8ae7003f76c9d4b9cf7c81280fa89e306b3a52a00d191deb6f1f2ae1ca. 
Oct 9 01:06:48.090128 containerd[1978]: time="2024-10-09T01:06:48.089717112Z" level=info msg="StartContainer for \"7e280a1c19cf4c1621b0b315c40c9d1bf799ca4024cb8216b994d8c904e91fcc\" returns successfully" Oct 9 01:06:48.190132 containerd[1978]: time="2024-10-09T01:06:48.189064386Z" level=info msg="StartContainer for \"bbc3ac8ae7003f76c9d4b9cf7c81280fa89e306b3a52a00d191deb6f1f2ae1ca\" returns successfully" Oct 9 01:06:48.540772 systemd-networkd[1817]: cali1dfdec4017e: Gained IPv6LL Oct 9 01:06:48.571870 kubelet[3286]: I1009 01:06:48.568640 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76788c6b87-jqmk4" podStartSLOduration=27.025219671 podStartE2EDuration="31.568620472s" podCreationTimestamp="2024-10-09 01:06:17 +0000 UTC" firstStartedPulling="2024-10-09 01:06:43.00959193 +0000 UTC m=+47.592718880" lastFinishedPulling="2024-10-09 01:06:47.552992726 +0000 UTC m=+52.136119681" observedRunningTime="2024-10-09 01:06:48.566043906 +0000 UTC m=+53.149171073" watchObservedRunningTime="2024-10-09 01:06:48.568620472 +0000 UTC m=+53.151747435" Oct 9 01:06:48.633104 kubelet[3286]: I1009 01:06:48.630472 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gsnst" podStartSLOduration=39.63041626 podStartE2EDuration="39.63041626s" podCreationTimestamp="2024-10-09 01:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:48.629858697 +0000 UTC m=+53.212985662" watchObservedRunningTime="2024-10-09 01:06:48.63041626 +0000 UTC m=+53.213543225" Oct 9 01:06:48.797039 systemd-networkd[1817]: cali457692cc1f1: Gained IPv6LL Oct 9 01:06:49.515212 containerd[1978]: time="2024-10-09T01:06:49.514227023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:49.518151 containerd[1978]: time="2024-10-09T01:06:49.518096694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 01:06:49.519716 containerd[1978]: time="2024-10-09T01:06:49.519680759Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:49.523546 containerd[1978]: time="2024-10-09T01:06:49.523508439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:49.525040 containerd[1978]: time="2024-10-09T01:06:49.525008842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.671943692s" Oct 9 01:06:49.525576 containerd[1978]: time="2024-10-09T01:06:49.525551250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 01:06:49.529682 containerd[1978]: time="2024-10-09T01:06:49.529266523Z" level=info msg="CreateContainer within sandbox 
\"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:06:49.575011 containerd[1978]: time="2024-10-09T01:06:49.574741907Z" level=info msg="CreateContainer within sandbox \"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c6f016e986f218a3e9c8aeda09e82af30ff26e28aa7d791aede93183b6822271\"" Oct 9 01:06:49.576183 containerd[1978]: time="2024-10-09T01:06:49.576150155Z" level=info msg="StartContainer for \"c6f016e986f218a3e9c8aeda09e82af30ff26e28aa7d791aede93183b6822271\"" Oct 9 01:06:49.643097 systemd[1]: Started cri-containerd-c6f016e986f218a3e9c8aeda09e82af30ff26e28aa7d791aede93183b6822271.scope - libcontainer container c6f016e986f218a3e9c8aeda09e82af30ff26e28aa7d791aede93183b6822271. Oct 9 01:06:49.725315 containerd[1978]: time="2024-10-09T01:06:49.725170246Z" level=info msg="StartContainer for \"c6f016e986f218a3e9c8aeda09e82af30ff26e28aa7d791aede93183b6822271\" returns successfully" Oct 9 01:06:49.728444 containerd[1978]: time="2024-10-09T01:06:49.728194254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:06:51.089492 systemd[1]: Started sshd@8-172.31.16.164:22-147.75.109.163:52138.service - OpenSSH per-connection server daemon (147.75.109.163:52138). Oct 9 01:06:51.349060 sshd[5396]: Accepted publickey for core from 147.75.109.163 port 52138 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:06:51.355489 sshd[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:06:51.380234 systemd-logind[1957]: New session 9 of user core. Oct 9 01:06:51.391103 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 9 01:06:51.564707 ntpd[1949]: Listen normally on 7 vxlan.calico 192.168.56.64:123 Oct 9 01:06:51.565032 ntpd[1949]: Listen normally on 8 cali96d382e19ee [fe80::ecee:eeff:feee:eeee%4]:123 Oct 9 01:06:51.565286 ntpd[1949]: 9 Oct 01:06:51 ntpd[1949]: Listen normally on 7 vxlan.calico 192.168.56.64:123 Oct 9 01:06:51.565286 ntpd[1949]: 9 Oct 01:06:51 ntpd[1949]: Listen normally on 8 cali96d382e19ee [fe80::ecee:eeff:feee:eeee%4]:123 Oct 9 01:06:51.565619 ntpd[1949]: Listen normally on 9 cali732ba5179c2 [fe80::ecee:eeff:feee:eeee%5]:123 Oct 9 01:06:51.570979 ntpd[1949]: 9 Oct 01:06:51 ntpd[1949]: Listen normally on 9 cali732ba5179c2 [fe80::ecee:eeff:feee:eeee%5]:123 Oct 9 01:06:51.570979 ntpd[1949]: 9 Oct 01:06:51 ntpd[1949]: Listen normally on 10 vxlan.calico [fe80::641c:66ff:fe1a:1b0d%6]:123 Oct 9 01:06:51.570979 ntpd[1949]: 9 Oct 01:06:51 ntpd[1949]: Listen normally on 11 cali1dfdec4017e [fe80::ecee:eeff:feee:eeee%9]:123 Oct 9 01:06:51.570979 ntpd[1949]: 9 Oct 01:06:51 ntpd[1949]: Listen normally on 12 cali457692cc1f1 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 9 01:06:51.565754 ntpd[1949]: Listen normally on 10 vxlan.calico [fe80::641c:66ff:fe1a:1b0d%6]:123 Oct 9 01:06:51.565846 ntpd[1949]: Listen normally on 11 cali1dfdec4017e [fe80::ecee:eeff:feee:eeee%9]:123 Oct 9 01:06:51.565892 ntpd[1949]: Listen normally on 12 cali457692cc1f1 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 9 01:06:52.198724 containerd[1978]: time="2024-10-09T01:06:52.197686302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:52.200654 containerd[1978]: time="2024-10-09T01:06:52.200601898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 01:06:52.202378 containerd[1978]: time="2024-10-09T01:06:52.202279233Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:52.209136 containerd[1978]: time="2024-10-09T01:06:52.209070166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:52.209934 containerd[1978]: time="2024-10-09T01:06:52.209872193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.481524689s" Oct 9 01:06:52.210448 containerd[1978]: time="2024-10-09T01:06:52.210079381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 01:06:52.212910 sshd[5396]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:52.223339 systemd[1]: sshd@8-172.31.16.164:22-147.75.109.163:52138.service: Deactivated successfully. Oct 9 01:06:52.228058 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:06:52.240909 systemd-logind[1957]: Session 9 logged out. Waiting for processes to exit. 
Oct 9 01:06:52.244375 systemd-logind[1957]: Removed session 9. Oct 9 01:06:52.258369 containerd[1978]: time="2024-10-09T01:06:52.258311320Z" level=info msg="CreateContainer within sandbox \"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:06:52.296155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986728022.mount: Deactivated successfully. Oct 9 01:06:52.300776 containerd[1978]: time="2024-10-09T01:06:52.300477876Z" level=info msg="CreateContainer within sandbox \"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e203850f016c8a6bb082db2a5549ed8252ac7a2a6c4115aba02bd69bff198af2\"" Oct 9 01:06:52.301651 containerd[1978]: time="2024-10-09T01:06:52.301544158Z" level=info msg="StartContainer for \"e203850f016c8a6bb082db2a5549ed8252ac7a2a6c4115aba02bd69bff198af2\"" Oct 9 01:06:52.379661 systemd[1]: Started cri-containerd-e203850f016c8a6bb082db2a5549ed8252ac7a2a6c4115aba02bd69bff198af2.scope - libcontainer container e203850f016c8a6bb082db2a5549ed8252ac7a2a6c4115aba02bd69bff198af2. Oct 9 01:06:52.469356 containerd[1978]: time="2024-10-09T01:06:52.468919459Z" level=info msg="StartContainer for \"e203850f016c8a6bb082db2a5549ed8252ac7a2a6c4115aba02bd69bff198af2\" returns successfully" Oct 9 01:06:53.475410 kubelet[3286]: I1009 01:06:53.474240 3286 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:06:53.482451 kubelet[3286]: I1009 01:06:53.482412 3286 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:06:55.836599 containerd[1978]: time="2024-10-09T01:06:55.836451358Z" level=info msg="StopPodSandbox for \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\"" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.891 [WARNING][5475] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0", GenerateName:"calico-kube-controllers-76788c6b87-", Namespace:"calico-system", SelfLink:"", UID:"d016466e-20dd-4a19-9b78-c0ff4431d047", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76788c6b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999", Pod:"calico-kube-controllers-76788c6b87-jqmk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali732ba5179c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.891 [INFO][5475] k8s.go 608: Cleaning up netns ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.891 [INFO][5475] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" iface="eth0" netns="" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.891 [INFO][5475] k8s.go 615: Releasing IP address(es) ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.891 [INFO][5475] utils.go 188: Calico CNI releasing IP address ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.920 [INFO][5481] ipam_plugin.go 417: Releasing address using handleID ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.920 [INFO][5481] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.920 [INFO][5481] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.927 [WARNING][5481] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.927 [INFO][5481] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.929 [INFO][5481] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:55.934044 containerd[1978]: 2024-10-09 01:06:55.931 [INFO][5475] k8s.go 621: Teardown processing complete. ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:55.934846 containerd[1978]: time="2024-10-09T01:06:55.934093090Z" level=info msg="TearDown network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\" successfully" Oct 9 01:06:55.934846 containerd[1978]: time="2024-10-09T01:06:55.934121275Z" level=info msg="StopPodSandbox for \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\" returns successfully" Oct 9 01:06:55.935198 containerd[1978]: time="2024-10-09T01:06:55.935139658Z" level=info msg="RemovePodSandbox for \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\"" Oct 9 01:06:55.949585 containerd[1978]: time="2024-10-09T01:06:55.949540584Z" level=info msg="Forcibly stopping sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\"" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.010 [WARNING][5500] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0", GenerateName:"calico-kube-controllers-76788c6b87-", Namespace:"calico-system", SelfLink:"", UID:"d016466e-20dd-4a19-9b78-c0ff4431d047", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76788c6b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"34905e616c529816de976c2ad7a974054a418754ec6ef8728594d720a6607999", Pod:"calico-kube-controllers-76788c6b87-jqmk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali732ba5179c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.014 [INFO][5500] k8s.go 608: Cleaning up netns ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.014 [INFO][5500] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" iface="eth0" netns="" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.014 [INFO][5500] k8s.go 615: Releasing IP address(es) ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.014 [INFO][5500] utils.go 188: Calico CNI releasing IP address ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.050 [INFO][5507] ipam_plugin.go 417: Releasing address using handleID ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.050 [INFO][5507] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.050 [INFO][5507] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.059 [WARNING][5507] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.059 [INFO][5507] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" HandleID="k8s-pod-network.7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Workload="ip--172--31--16--164-k8s-calico--kube--controllers--76788c6b87--jqmk4-eth0" Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.062 [INFO][5507] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:56.066463 containerd[1978]: 2024-10-09 01:06:56.064 [INFO][5500] k8s.go 621: Teardown processing complete. ContainerID="7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306" Oct 9 01:06:56.067984 containerd[1978]: time="2024-10-09T01:06:56.066947499Z" level=info msg="TearDown network for sandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\" successfully" Oct 9 01:06:56.078548 containerd[1978]: time="2024-10-09T01:06:56.078494274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:06:56.078687 containerd[1978]: time="2024-10-09T01:06:56.078576303Z" level=info msg="RemovePodSandbox \"7a08ee4cac52322e25275f03713fd354bdd40bf6abfc42d21e75687bcfd7b306\" returns successfully" Oct 9 01:06:56.079415 containerd[1978]: time="2024-10-09T01:06:56.079381283Z" level=info msg="StopPodSandbox for \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\"" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.131 [WARNING][5526] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3b7d4d8-eaee-47df-9d20-3c65da15fec6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380", Pod:"csi-node-driver-554cf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali457692cc1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.132 [INFO][5526] k8s.go 608: Cleaning up netns ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.132 [INFO][5526] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" iface="eth0" netns="" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.132 [INFO][5526] k8s.go 615: Releasing IP address(es) ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.132 [INFO][5526] utils.go 188: Calico CNI releasing IP address ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.165 [INFO][5532] ipam_plugin.go 417: Releasing address using handleID ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.165 [INFO][5532] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.165 [INFO][5532] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.172 [WARNING][5532] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.172 [INFO][5532] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.174 [INFO][5532] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:56.178682 containerd[1978]: 2024-10-09 01:06:56.176 [INFO][5526] k8s.go 621: Teardown processing complete. ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.178682 containerd[1978]: time="2024-10-09T01:06:56.178458532Z" level=info msg="TearDown network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\" successfully" Oct 9 01:06:56.178682 containerd[1978]: time="2024-10-09T01:06:56.178499546Z" level=info msg="StopPodSandbox for \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\" returns successfully" Oct 9 01:06:56.181345 containerd[1978]: time="2024-10-09T01:06:56.181297461Z" level=info msg="RemovePodSandbox for \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\"" Oct 9 01:06:56.181429 containerd[1978]: time="2024-10-09T01:06:56.181351295Z" level=info msg="Forcibly stopping sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\"" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.228 [WARNING][5550] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3b7d4d8-eaee-47df-9d20-3c65da15fec6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"8737f6a3015519e21eb038ce1b7c3aa10e54ce8d009f9448a5494ae2fb2f3380", Pod:"csi-node-driver-554cf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali457692cc1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.229 [INFO][5550] k8s.go 608: Cleaning up netns ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.229 [INFO][5550] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" iface="eth0" netns="" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.229 [INFO][5550] k8s.go 615: Releasing IP address(es) ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.229 [INFO][5550] utils.go 188: Calico CNI releasing IP address ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.258 [INFO][5556] ipam_plugin.go 417: Releasing address using handleID ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.259 [INFO][5556] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.259 [INFO][5556] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.275 [WARNING][5556] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.275 [INFO][5556] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" HandleID="k8s-pod-network.c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Workload="ip--172--31--16--164-k8s-csi--node--driver--554cf-eth0" Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.277 [INFO][5556] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:56.289886 containerd[1978]: 2024-10-09 01:06:56.282 [INFO][5550] k8s.go 621: Teardown processing complete. ContainerID="c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5" Oct 9 01:06:56.289886 containerd[1978]: time="2024-10-09T01:06:56.287060541Z" level=info msg="TearDown network for sandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\" successfully" Oct 9 01:06:56.295593 containerd[1978]: time="2024-10-09T01:06:56.295537496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:06:56.296662 containerd[1978]: time="2024-10-09T01:06:56.296622184Z" level=info msg="RemovePodSandbox \"c1a03232ca686c47a1239cfc898439f66b9a5a02aefa5690ed9880a2bad3cba5\" returns successfully" Oct 9 01:06:56.299170 containerd[1978]: time="2024-10-09T01:06:56.298744984Z" level=info msg="StopPodSandbox for \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\"" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.449 [WARNING][5574] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952", Pod:"coredns-7db6d8ff4d-drbth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96d382e19ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.451 [INFO][5574] k8s.go 608: Cleaning up netns ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.451 [INFO][5574] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" iface="eth0" netns="" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.451 [INFO][5574] k8s.go 615: Releasing IP address(es) ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.451 [INFO][5574] utils.go 188: Calico CNI releasing IP address ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.499 [INFO][5581] ipam_plugin.go 417: Releasing address using handleID ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.499 [INFO][5581] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.499 [INFO][5581] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.509 [WARNING][5581] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.509 [INFO][5581] ipam_plugin.go 445: Releasing address using workloadID ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.510 [INFO][5581] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:56.514971 containerd[1978]: 2024-10-09 01:06:56.512 [INFO][5574] k8s.go 621: Teardown processing complete. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.514971 containerd[1978]: time="2024-10-09T01:06:56.514612766Z" level=info msg="TearDown network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\" successfully" Oct 9 01:06:56.514971 containerd[1978]: time="2024-10-09T01:06:56.514643059Z" level=info msg="StopPodSandbox for \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\" returns successfully" Oct 9 01:06:56.519061 containerd[1978]: time="2024-10-09T01:06:56.516621667Z" level=info msg="RemovePodSandbox for \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\"" Oct 9 01:06:56.519061 containerd[1978]: time="2024-10-09T01:06:56.516663041Z" level=info msg="Forcibly stopping sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\"" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.596 [WARNING][5600] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a46cf12a-2481-4bbd-9cc4-7841ca00f5d0", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"c9e2cd44f82c523105a8b4a337f7f892471d040e693f9c7e9dc9f3578a9cd952", Pod:"coredns-7db6d8ff4d-drbth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96d382e19ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.597 [INFO][5600] k8s.go 608: Cleaning up netns ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.597 [INFO][5600] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" iface="eth0" netns="" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.597 [INFO][5600] k8s.go 615: Releasing IP address(es) ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.597 [INFO][5600] utils.go 188: Calico CNI releasing IP address ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.639 [INFO][5607] ipam_plugin.go 417: Releasing address using handleID ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.639 [INFO][5607] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.639 [INFO][5607] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.646 [WARNING][5607] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.646 [INFO][5607] ipam_plugin.go 445: Releasing address using workloadID ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" HandleID="k8s-pod-network.66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--drbth-eth0" Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.648 [INFO][5607] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:56.653663 containerd[1978]: 2024-10-09 01:06:56.650 [INFO][5600] k8s.go 621: Teardown processing complete. ContainerID="66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee" Oct 9 01:06:56.655508 containerd[1978]: time="2024-10-09T01:06:56.653703013Z" level=info msg="TearDown network for sandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\" successfully" Oct 9 01:06:56.659396 containerd[1978]: time="2024-10-09T01:06:56.659339104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:06:56.659533 containerd[1978]: time="2024-10-09T01:06:56.659423862Z" level=info msg="RemovePodSandbox \"66cf23dea225abf84ed656e1c243e7001611cc2dd763a1085156590ed218d2ee\" returns successfully" Oct 9 01:06:56.660085 containerd[1978]: time="2024-10-09T01:06:56.660053943Z" level=info msg="StopPodSandbox for \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\"" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.719 [WARNING][5625] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81", Pod:"coredns-7db6d8ff4d-gsnst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1dfdec4017e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.720 [INFO][5625] k8s.go 608: Cleaning up netns ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.720 [INFO][5625] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" iface="eth0" netns="" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.720 [INFO][5625] k8s.go 615: Releasing IP address(es) ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.720 [INFO][5625] utils.go 188: Calico CNI releasing IP address ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.749 [INFO][5631] ipam_plugin.go 417: Releasing address using handleID ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.749 [INFO][5631] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.749 [INFO][5631] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.756 [WARNING][5631] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.756 [INFO][5631] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.758 [INFO][5631] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:56.762682 containerd[1978]: 2024-10-09 01:06:56.760 [INFO][5625] k8s.go 621: Teardown processing complete. ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.763915 containerd[1978]: time="2024-10-09T01:06:56.762734174Z" level=info msg="TearDown network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\" successfully" Oct 9 01:06:56.763915 containerd[1978]: time="2024-10-09T01:06:56.762762001Z" level=info msg="StopPodSandbox for \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\" returns successfully" Oct 9 01:06:56.763915 containerd[1978]: time="2024-10-09T01:06:56.763559301Z" level=info msg="RemovePodSandbox for \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\"" Oct 9 01:06:56.763915 containerd[1978]: time="2024-10-09T01:06:56.763594821Z" level=info msg="Forcibly stopping sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\"" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.815 [WARNING][5649] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3e699c3b-f10c-4efc-8adb-92aa9bdb1e47", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"c0350a559568989ed674688ec7f956cceaa2aa256d9aa2e37ab4f1e81e27fb81", Pod:"coredns-7db6d8ff4d-gsnst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1dfdec4017e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.815 [INFO][5649] k8s.go 608: Cleaning up netns ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.815 [INFO][5649] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" iface="eth0" netns="" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.816 [INFO][5649] k8s.go 615: Releasing IP address(es) ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.816 [INFO][5649] utils.go 188: Calico CNI releasing IP address ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.849 [INFO][5656] ipam_plugin.go 417: Releasing address using handleID ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.849 [INFO][5656] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.849 [INFO][5656] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.858 [WARNING][5656] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.858 [INFO][5656] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" HandleID="k8s-pod-network.9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Workload="ip--172--31--16--164-k8s-coredns--7db6d8ff4d--gsnst-eth0" Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.860 [INFO][5656] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:06:56.864082 containerd[1978]: 2024-10-09 01:06:56.862 [INFO][5649] k8s.go 621: Teardown processing complete. ContainerID="9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898" Oct 9 01:06:56.864965 containerd[1978]: time="2024-10-09T01:06:56.864918677Z" level=info msg="TearDown network for sandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\" successfully" Oct 9 01:06:56.896724 containerd[1978]: time="2024-10-09T01:06:56.896665536Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:06:56.896888 containerd[1978]: time="2024-10-09T01:06:56.896756729Z" level=info msg="RemovePodSandbox \"9911ac37f1f6f560520c1d073cdf80dd35cf957addf4fb0a8582c724a55a1898\" returns successfully" Oct 9 01:06:57.251973 systemd[1]: Started sshd@9-172.31.16.164:22-147.75.109.163:43938.service - OpenSSH per-connection server daemon (147.75.109.163:43938). Oct 9 01:06:57.426694 sshd[5665]: Accepted publickey for core from 147.75.109.163 port 43938 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:06:57.430341 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:06:57.440330 systemd-logind[1957]: New session 10 of user core. Oct 9 01:06:57.448432 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:06:57.753167 sshd[5665]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:57.760528 systemd[1]: sshd@9-172.31.16.164:22-147.75.109.163:43938.service: Deactivated successfully. Oct 9 01:06:57.763396 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:06:57.764810 systemd-logind[1957]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:06:57.766334 systemd-logind[1957]: Removed session 10. Oct 9 01:07:02.790246 systemd[1]: Started sshd@10-172.31.16.164:22-147.75.109.163:43948.service - OpenSSH per-connection server daemon (147.75.109.163:43948). Oct 9 01:07:02.960494 sshd[5700]: Accepted publickey for core from 147.75.109.163 port 43948 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:02.962542 sshd[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:02.968893 systemd-logind[1957]: New session 11 of user core. Oct 9 01:07:02.974045 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:07:03.294225 sshd[5700]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:03.298727 systemd[1]: sshd@10-172.31.16.164:22-147.75.109.163:43948.service: Deactivated successfully. 
Oct 9 01:07:03.304024 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:07:03.308206 systemd-logind[1957]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:07:03.310049 systemd-logind[1957]: Removed session 11. Oct 9 01:07:03.330151 systemd[1]: Started sshd@11-172.31.16.164:22-147.75.109.163:43950.service - OpenSSH per-connection server daemon (147.75.109.163:43950). Oct 9 01:07:03.498380 sshd[5719]: Accepted publickey for core from 147.75.109.163 port 43950 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:03.501522 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:03.508099 systemd-logind[1957]: New session 12 of user core. Oct 9 01:07:03.514089 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:07:03.816357 sshd[5719]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:03.831312 systemd-logind[1957]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:07:03.833483 systemd[1]: sshd@11-172.31.16.164:22-147.75.109.163:43950.service: Deactivated successfully. Oct 9 01:07:03.840716 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:07:03.868244 systemd[1]: Started sshd@12-172.31.16.164:22-147.75.109.163:43960.service - OpenSSH per-connection server daemon (147.75.109.163:43960). Oct 9 01:07:03.873927 systemd-logind[1957]: Removed session 12. Oct 9 01:07:04.054496 sshd[5736]: Accepted publickey for core from 147.75.109.163 port 43960 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:04.057233 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:04.065502 systemd-logind[1957]: New session 13 of user core. Oct 9 01:07:04.070049 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:07:04.451621 sshd[5736]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:04.456592 systemd[1]: sshd@12-172.31.16.164:22-147.75.109.163:43960.service: Deactivated successfully. Oct 9 01:07:04.459120 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:07:04.460273 systemd-logind[1957]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:07:04.462147 systemd-logind[1957]: Removed session 13. Oct 9 01:07:09.489465 systemd[1]: Started sshd@13-172.31.16.164:22-147.75.109.163:42256.service - OpenSSH per-connection server daemon (147.75.109.163:42256). Oct 9 01:07:09.661611 sshd[5754]: Accepted publickey for core from 147.75.109.163 port 42256 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:09.663434 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:09.669461 systemd-logind[1957]: New session 14 of user core. Oct 9 01:07:09.675132 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:07:09.921920 sshd[5754]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:09.928025 systemd[1]: sshd@13-172.31.16.164:22-147.75.109.163:42256.service: Deactivated successfully. Oct 9 01:07:09.930799 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:07:09.932123 systemd-logind[1957]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:07:09.934414 systemd-logind[1957]: Removed session 14. Oct 9 01:07:14.965522 systemd[1]: Started sshd@14-172.31.16.164:22-147.75.109.163:42268.service - OpenSSH per-connection server daemon (147.75.109.163:42268). 
Oct 9 01:07:15.174730 sshd[5775]: Accepted publickey for core from 147.75.109.163 port 42268 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:15.181356 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:15.187547 systemd-logind[1957]: New session 15 of user core. Oct 9 01:07:15.197122 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 01:07:15.475699 sshd[5775]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:15.482228 systemd[1]: sshd@14-172.31.16.164:22-147.75.109.163:42268.service: Deactivated successfully. Oct 9 01:07:15.485735 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:07:15.486792 systemd-logind[1957]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:07:15.488737 systemd-logind[1957]: Removed session 15. Oct 9 01:07:17.356626 systemd[1]: run-containerd-runc-k8s.io-ebb2747a569420c60e9ed5b44b8e52ef5fd5b0a78d3b68533eba2920f9b0cabe-runc.sHaBcE.mount: Deactivated successfully. Oct 9 01:07:20.518290 systemd[1]: Started sshd@15-172.31.16.164:22-147.75.109.163:34576.service - OpenSSH per-connection server daemon (147.75.109.163:34576). Oct 9 01:07:20.717019 sshd[5812]: Accepted publickey for core from 147.75.109.163 port 34576 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:20.723025 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:20.731896 systemd-logind[1957]: New session 16 of user core. Oct 9 01:07:20.738366 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:07:21.325072 sshd[5812]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:21.336723 systemd[1]: sshd@15-172.31.16.164:22-147.75.109.163:34576.service: Deactivated successfully. Oct 9 01:07:21.337319 systemd-logind[1957]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:07:21.340525 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:07:21.341876 systemd-logind[1957]: Removed session 16. Oct 9 01:07:26.363405 systemd[1]: Started sshd@16-172.31.16.164:22-147.75.109.163:34590.service - OpenSSH per-connection server daemon (147.75.109.163:34590). Oct 9 01:07:26.541136 sshd[5836]: Accepted publickey for core from 147.75.109.163 port 34590 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:26.543605 sshd[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:26.553484 systemd-logind[1957]: New session 17 of user core. Oct 9 01:07:26.563011 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:07:26.784145 sshd[5836]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:26.795744 systemd[1]: sshd@16-172.31.16.164:22-147.75.109.163:34590.service: Deactivated successfully. Oct 9 01:07:26.799039 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:07:26.800734 systemd-logind[1957]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:07:26.802622 systemd-logind[1957]: Removed session 17. Oct 9 01:07:26.820346 systemd[1]: Started sshd@17-172.31.16.164:22-147.75.109.163:34602.service - OpenSSH per-connection server daemon (147.75.109.163:34602). 
Oct 9 01:07:26.983692 sshd[5849]: Accepted publickey for core from 147.75.109.163 port 34602 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:26.984973 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:26.990879 systemd-logind[1957]: New session 18 of user core. Oct 9 01:07:26.996025 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:07:27.618204 sshd[5849]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:27.621356 systemd[1]: sshd@17-172.31.16.164:22-147.75.109.163:34602.service: Deactivated successfully. Oct 9 01:07:27.623669 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:07:27.625905 systemd-logind[1957]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:07:27.627250 systemd-logind[1957]: Removed session 18. Oct 9 01:07:27.652173 systemd[1]: Started sshd@18-172.31.16.164:22-147.75.109.163:47076.service - OpenSSH per-connection server daemon (147.75.109.163:47076). Oct 9 01:07:27.805365 sshd[5863]: Accepted publickey for core from 147.75.109.163 port 47076 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:27.806984 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:27.811249 systemd-logind[1957]: New session 19 of user core. Oct 9 01:07:27.818041 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:07:29.823282 systemd[1]: run-containerd-runc-k8s.io-7e280a1c19cf4c1621b0b315c40c9d1bf799ca4024cb8216b994d8c904e91fcc-runc.kEPbo1.mount: Deactivated successfully. Oct 9 01:07:30.286116 sshd[5863]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:30.298181 systemd[1]: sshd@18-172.31.16.164:22-147.75.109.163:47076.service: Deactivated successfully. Oct 9 01:07:30.304940 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:07:30.308722 systemd-logind[1957]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:07:30.328306 systemd[1]: Started sshd@19-172.31.16.164:22-147.75.109.163:47086.service - OpenSSH per-connection server daemon (147.75.109.163:47086). Oct 9 01:07:30.330903 systemd-logind[1957]: Removed session 19. Oct 9 01:07:30.518262 sshd[5904]: Accepted publickey for core from 147.75.109.163 port 47086 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:30.521122 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:30.528371 systemd-logind[1957]: New session 20 of user core. Oct 9 01:07:30.532282 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:07:31.311069 sshd[5904]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:31.321233 systemd[1]: sshd@19-172.31.16.164:22-147.75.109.163:47086.service: Deactivated successfully. Oct 9 01:07:31.326414 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:07:31.331718 systemd-logind[1957]: Session 20 logged out. Waiting for processes to exit. Oct 9 01:07:31.352195 systemd[1]: Started sshd@20-172.31.16.164:22-147.75.109.163:47098.service - OpenSSH per-connection server daemon (147.75.109.163:47098). Oct 9 01:07:31.353959 systemd-logind[1957]: Removed session 20. 
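Sessions 10 through 20 above all follow the same shape: sshd accepts the key for user core, pam_unix and systemd-logind open the session, and a matching "Removed session N" closes it. A small sketch that pairs those open/close lines from an exported journal and prints per-session durations; the file name and the assumed year for the year-less timestamps are placeholders:

```python
from __future__ import annotations

import re
from datetime import datetime

JOURNAL = "journal.log"   # hypothetical export of the journal to plain text
ASSUMED_YEAR = 2024       # the short timestamps above carry no year

STAMP = re.compile(r"^(\w{3})\s+(\d{1,2})\s+(\d{2}:\d{2}:\d{2}\.\d+)")
OPENED = re.compile(r"systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
CLOSED = re.compile(r"systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_stamp(line: str) -> datetime | None:
    """Parse the leading 'Oct 9 01:07:26.983692'-style timestamp, if present."""
    m = STAMP.match(line)
    if not m:
        return None
    month, day, clock = m.groups()
    return datetime.strptime(f"{ASSUMED_YEAR} {month} {int(day):02d} {clock}",
                             "%Y %b %d %H:%M:%S.%f")

def session_durations(path: str) -> None:
    opened: dict[str, tuple[datetime, str]] = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            stamp = parse_stamp(line)
            if stamp is None:
                continue
            if m := OPENED.search(line):
                opened[m.group(1)] = (stamp, m.group(2))
            elif (m := CLOSED.search(line)) and m.group(1) in opened:
                start, user = opened.pop(m.group(1))
                secs = (stamp - start).total_seconds()
                print(f"session {m.group(1)} ({user}): {secs:.1f}s")

if __name__ == "__main__":
    session_durations(JOURNAL)
```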
Oct 9 01:07:31.506452 sshd[5915]: Accepted publickey for core from 147.75.109.163 port 47098 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:31.508057 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:31.512926 systemd-logind[1957]: New session 21 of user core. Oct 9 01:07:31.520055 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 01:07:31.768928 sshd[5915]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:31.773393 systemd[1]: sshd@20-172.31.16.164:22-147.75.109.163:47098.service: Deactivated successfully. Oct 9 01:07:31.775659 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 01:07:31.777108 systemd-logind[1957]: Session 21 logged out. Waiting for processes to exit. Oct 9 01:07:31.778570 systemd-logind[1957]: Removed session 21. Oct 9 01:07:36.817227 systemd[1]: Started sshd@21-172.31.16.164:22-147.75.109.163:47112.service - OpenSSH per-connection server daemon (147.75.109.163:47112). Oct 9 01:07:36.993855 kubelet[3286]: I1009 01:07:36.987008 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-554cf" podStartSLOduration=75.598443477 podStartE2EDuration="1m19.963068875s" podCreationTimestamp="2024-10-09 01:06:17 +0000 UTC" firstStartedPulling="2024-10-09 01:06:47.851373396 +0000 UTC m=+52.434500341" lastFinishedPulling="2024-10-09 01:06:52.215998786 +0000 UTC m=+56.799125739" observedRunningTime="2024-10-09 01:06:52.606847262 +0000 UTC m=+57.189974224" watchObservedRunningTime="2024-10-09 01:07:36.963068875 +0000 UTC m=+101.546195839" Oct 9 01:07:37.020558 kubelet[3286]: I1009 01:07:37.020351 3286 topology_manager.go:215] "Topology Admit Handler" podUID="ffbfda74-786d-459d-a79c-37f0b3142bf5" podNamespace="calico-apiserver" podName="calico-apiserver-67ffdcd47d-7tr9k" Oct 9 01:07:37.029673 sshd[5934]: Accepted publickey for core from 147.75.109.163 port 47112 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:37.037744 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:37.085267 systemd-logind[1957]: New session 22 of user core. Oct 9 01:07:37.092042 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 01:07:37.154575 kubelet[3286]: I1009 01:07:37.154343 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffbfda74-786d-459d-a79c-37f0b3142bf5-calico-apiserver-certs\") pod \"calico-apiserver-67ffdcd47d-7tr9k\" (UID: \"ffbfda74-786d-459d-a79c-37f0b3142bf5\") " pod="calico-apiserver/calico-apiserver-67ffdcd47d-7tr9k" Oct 9 01:07:37.154575 kubelet[3286]: I1009 01:07:37.154456 3286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsr4m\" (UniqueName: \"kubernetes.io/projected/ffbfda74-786d-459d-a79c-37f0b3142bf5-kube-api-access-dsr4m\") pod \"calico-apiserver-67ffdcd47d-7tr9k\" (UID: \"ffbfda74-786d-459d-a79c-37f0b3142bf5\") " pod="calico-apiserver/calico-apiserver-67ffdcd47d-7tr9k" Oct 9 01:07:37.163076 systemd[1]: Created slice kubepods-besteffort-podffbfda74_786d_459d_a79c_37f0b3142bf5.slice - libcontainer container kubepods-besteffort-podffbfda74_786d_459d_a79c_37f0b3142bf5.slice. 
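The pod_startup_latency_tracker entry above for csi-node-driver-554cf reports two durations that are internally consistent: podStartE2EDuration (1m19.963068875s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (75.598443477) equals the same span minus the image-pull window between firstStartedPulling and lastFinishedPulling. A short check of that arithmetic using the figures printed in the entry (reading the SLO number as excluding pull time is an inference from the values, not something the log states):

```python
from decimal import Decimal

# Figures copied from the kubelet pod_startup_latency_tracker entry above,
# expressed as seconds past 01:00 UTC (wall clock) or as m=+ monotonic offsets.
created_s   = Decimal("377")              # podCreationTimestamp     01:06:17
observed_s  = Decimal("456.963068875")    # watchObservedRunningTime 01:07:36.963068875
pull_start  = Decimal("52.434500341")     # firstStartedPulling      (m=+ offset)
pull_finish = Decimal("56.799125739")     # lastFinishedPulling      (m=+ offset)

e2e = observed_s - created_s              # logged: podStartE2EDuration = 1m19.963068875s
slo = e2e - (pull_finish - pull_start)    # logged: podStartSLOduration = 75.598443477

print(f"E2E duration: {e2e}s")
print(f"SLO duration: {slo}s")
```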
Oct 9 01:07:37.289042 kubelet[3286]: E1009 01:07:37.288856 3286 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 01:07:37.328619 kubelet[3286]: E1009 01:07:37.327590 3286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffbfda74-786d-459d-a79c-37f0b3142bf5-calico-apiserver-certs podName:ffbfda74-786d-459d-a79c-37f0b3142bf5 nodeName:}" failed. No retries permitted until 2024-10-09 01:07:37.796181787 +0000 UTC m=+102.379308748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ffbfda74-786d-459d-a79c-37f0b3142bf5-calico-apiserver-certs") pod "calico-apiserver-67ffdcd47d-7tr9k" (UID: "ffbfda74-786d-459d-a79c-37f0b3142bf5") : secret "calico-apiserver-certs" not found Oct 9 01:07:37.582726 sshd[5934]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:37.591222 systemd[1]: sshd@21-172.31.16.164:22-147.75.109.163:47112.service: Deactivated successfully. Oct 9 01:07:37.595807 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 01:07:37.597491 systemd-logind[1957]: Session 22 logged out. Waiting for processes to exit. Oct 9 01:07:37.600534 systemd-logind[1957]: Removed session 22. Oct 9 01:07:38.071172 containerd[1978]: time="2024-10-09T01:07:38.071036455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67ffdcd47d-7tr9k,Uid:ffbfda74-786d-459d-a79c-37f0b3142bf5,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:07:38.357563 systemd-networkd[1817]: cali748724d090d: Link UP Oct 9 01:07:38.358492 systemd-networkd[1817]: cali748724d090d: Gained carrier Oct 9 01:07:38.364807 (udev-worker)[5970]: Network interface NamePolicy= disabled on kernel command line. 
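The kubelet errors above explain the brief stall of calico-apiserver-67ffdcd47d-7tr9k: MountVolume.SetUp for calico-apiserver-certs fails because the secret does not exist yet, and the kubelet simply schedules a retry 500ms later rather than failing the pod. A hedged sketch of polling for that secret from outside the node with the official kubernetes Python client; the namespace and secret name come from the error text, and the kubeconfig handling is whatever your environment provides:

```python
import time

from kubernetes import client, config
from kubernetes.client.rest import ApiException

NAMESPACE = "calico-apiserver"        # namespace from the kubelet error above
SECRET = "calico-apiserver-certs"     # secret name from the kubelet error above

def wait_for_secret(namespace: str, name: str, timeout_s: float = 60.0) -> bool:
    """Return True once the secret exists, False if the timeout elapses."""
    config.load_kube_config()         # assumes a working kubeconfig
    v1 = client.CoreV1Api()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            v1.read_namespaced_secret(name, namespace)
            return True
        except ApiException as exc:
            if exc.status != 404:     # anything other than "not found" is unexpected
                raise
        time.sleep(0.5)               # mirror the kubelet's 500ms durationBeforeRetry
    return False

if __name__ == "__main__":
    print("secret present:", wait_for_secret(NAMESPACE, SECRET))
```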
Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.224 [INFO][5953] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0 calico-apiserver-67ffdcd47d- calico-apiserver ffbfda74-786d-459d-a79c-37f0b3142bf5 1124 0 2024-10-09 01:07:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67ffdcd47d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-164 calico-apiserver-67ffdcd47d-7tr9k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali748724d090d [] []}} ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.224 [INFO][5953] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.288 [INFO][5964] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" HandleID="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Workload="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.307 [INFO][5964] ipam_plugin.go 270: Auto assigning IP ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" HandleID="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Workload="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-164", "pod":"calico-apiserver-67ffdcd47d-7tr9k", "timestamp":"2024-10-09 01:07:38.288543532 +0000 UTC"}, Hostname:"ip-172-31-16-164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.308 [INFO][5964] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.308 [INFO][5964] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.308 [INFO][5964] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-164' Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.310 [INFO][5964] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.321 [INFO][5964] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.326 [INFO][5964] ipam.go 489: Trying affinity for 192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.331 [INFO][5964] ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.334 [INFO][5964] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.334 [INFO][5964] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.336 [INFO][5964] ipam.go 1685: Creating new handle: k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429 Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.340 [INFO][5964] ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.350 [INFO][5964] ipam.go 1216: Successfully claimed IPs: [192.168.56.69/26] block=192.168.56.64/26 handle="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.350 [INFO][5964] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.69/26] handle="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" host="ip-172-31-16-164" Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.350 [INFO][5964] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:38.380902 containerd[1978]: 2024-10-09 01:07:38.350 [INFO][5964] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.56.69/26] IPv6=[] ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" HandleID="k8s-pod-network.57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Workload="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" Oct 9 01:07:38.381998 containerd[1978]: 2024-10-09 01:07:38.354 [INFO][5953] k8s.go 386: Populated endpoint ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0", GenerateName:"calico-apiserver-67ffdcd47d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffbfda74-786d-459d-a79c-37f0b3142bf5", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67ffdcd47d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"", Pod:"calico-apiserver-67ffdcd47d-7tr9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali748724d090d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:38.381998 containerd[1978]: 2024-10-09 01:07:38.354 [INFO][5953] k8s.go 387: Calico CNI using IPs: [192.168.56.69/32] ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" Oct 9 01:07:38.381998 containerd[1978]: 2024-10-09 01:07:38.354 [INFO][5953] dataplane_linux.go 68: Setting the host side veth name to cali748724d090d ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" Oct 9 01:07:38.381998 containerd[1978]: 2024-10-09 01:07:38.357 [INFO][5953] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" Oct 9 01:07:38.381998 containerd[1978]: 2024-10-09 01:07:38.357 [INFO][5953] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0", GenerateName:"calico-apiserver-67ffdcd47d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffbfda74-786d-459d-a79c-37f0b3142bf5", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67ffdcd47d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-164", ContainerID:"57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429", Pod:"calico-apiserver-67ffdcd47d-7tr9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali748724d090d", MAC:"4a:b5:a6:88:b8:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:38.381998 containerd[1978]: 2024-10-09 01:07:38.375 [INFO][5953] k8s.go 500: Wrote updated endpoint to datastore ContainerID="57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429" Namespace="calico-apiserver" Pod="calico-apiserver-67ffdcd47d-7tr9k" WorkloadEndpoint="ip--172--31--16--164-k8s-calico--apiserver--67ffdcd47d--7tr9k-eth0" Oct 9 01:07:38.453279 containerd[1978]: time="2024-10-09T01:07:38.453026457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:38.453279 containerd[1978]: time="2024-10-09T01:07:38.453188737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:38.453279 containerd[1978]: time="2024-10-09T01:07:38.453229617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:38.458911 containerd[1978]: time="2024-10-09T01:07:38.456225455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:38.501026 systemd[1]: Started cri-containerd-57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429.scope - libcontainer container 57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429. 
Oct 9 01:07:38.596469 containerd[1978]: time="2024-10-09T01:07:38.596383784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67ffdcd47d-7tr9k,Uid:ffbfda74-786d-459d-a79c-37f0b3142bf5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429\"" Oct 9 01:07:38.600856 containerd[1978]: time="2024-10-09T01:07:38.600505565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:07:39.676048 systemd-networkd[1817]: cali748724d090d: Gained IPv6LL Oct 9 01:07:42.560216 ntpd[1949]: Listen normally on 13 cali748724d090d [fe80::ecee:eeff:feee:eeee%11]:123 Oct 9 01:07:42.560776 ntpd[1949]: 9 Oct 01:07:42 ntpd[1949]: Listen normally on 13 cali748724d090d [fe80::ecee:eeff:feee:eeee%11]:123 Oct 9 01:07:42.638579 systemd[1]: Started sshd@22-172.31.16.164:22-147.75.109.163:45632.service - OpenSSH per-connection server daemon (147.75.109.163:45632). Oct 9 01:07:42.658729 containerd[1978]: time="2024-10-09T01:07:42.658119558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 01:07:42.675908 containerd[1978]: time="2024-10-09T01:07:42.675574105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.075025319s" Oct 9 01:07:42.675908 containerd[1978]: time="2024-10-09T01:07:42.675626126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 01:07:42.688691 containerd[1978]: time="2024-10-09T01:07:42.688654086Z" level=info msg="CreateContainer within sandbox \"57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:07:42.734844 containerd[1978]: time="2024-10-09T01:07:42.733861324Z" level=info msg="CreateContainer within sandbox \"57dd9b61ef27dda0625a8421d664a60de253c3c1835067bcff2c14d75e8fd429\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"433cc8581597900fefbd2251f595bd6b0ab470f1709545245fc0f6ef7dfda4bf\"" Oct 9 01:07:42.738123 containerd[1978]: time="2024-10-09T01:07:42.737228321Z" level=info msg="StartContainer for \"433cc8581597900fefbd2251f595bd6b0ab470f1709545245fc0f6ef7dfda4bf\"" Oct 9 01:07:42.809892 containerd[1978]: time="2024-10-09T01:07:42.809837436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:42.811560 containerd[1978]: time="2024-10-09T01:07:42.811446436Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:42.814140 containerd[1978]: time="2024-10-09T01:07:42.813484113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:42.832096 systemd[1]: Started cri-containerd-433cc8581597900fefbd2251f595bd6b0ab470f1709545245fc0f6ef7dfda4bf.scope - 
libcontainer container 433cc8581597900fefbd2251f595bd6b0ab470f1709545245fc0f6ef7dfda4bf. Oct 9 01:07:42.931262 containerd[1978]: time="2024-10-09T01:07:42.931214234Z" level=info msg="StartContainer for \"433cc8581597900fefbd2251f595bd6b0ab470f1709545245fc0f6ef7dfda4bf\" returns successfully" Oct 9 01:07:42.949972 sshd[6053]: Accepted publickey for core from 147.75.109.163 port 45632 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:42.953265 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:42.962697 systemd-logind[1957]: New session 23 of user core. Oct 9 01:07:42.970898 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 01:07:43.857213 sshd[6053]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:43.863422 systemd[1]: sshd@22-172.31.16.164:22-147.75.109.163:45632.service: Deactivated successfully. Oct 9 01:07:43.869150 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 01:07:43.871059 systemd-logind[1957]: Session 23 logged out. Waiting for processes to exit. Oct 9 01:07:43.872330 systemd-logind[1957]: Removed session 23. Oct 9 01:07:44.875473 kubelet[3286]: I1009 01:07:44.875398 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67ffdcd47d-7tr9k" podStartSLOduration=4.795865622 podStartE2EDuration="8.875370383s" podCreationTimestamp="2024-10-09 01:07:36 +0000 UTC" firstStartedPulling="2024-10-09 01:07:38.599902045 +0000 UTC m=+103.183028997" lastFinishedPulling="2024-10-09 01:07:42.679406806 +0000 UTC m=+107.262533758" observedRunningTime="2024-10-09 01:07:43.81417194 +0000 UTC m=+108.397298903" watchObservedRunningTime="2024-10-09 01:07:44.875370383 +0000 UTC m=+109.458497345" Oct 9 01:07:47.425521 systemd[1]: run-containerd-runc-k8s.io-ebb2747a569420c60e9ed5b44b8e52ef5fd5b0a78d3b68533eba2920f9b0cabe-runc.8yAQNg.mount: Deactivated successfully. Oct 9 01:07:48.896184 systemd[1]: Started sshd@23-172.31.16.164:22-147.75.109.163:49870.service - OpenSSH per-connection server daemon (147.75.109.163:49870). Oct 9 01:07:49.091272 sshd[6149]: Accepted publickey for core from 147.75.109.163 port 49870 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:49.110771 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:49.125984 systemd-logind[1957]: New session 24 of user core. Oct 9 01:07:49.134482 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 01:07:49.581193 sshd[6149]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:49.589526 systemd-logind[1957]: Session 24 logged out. Waiting for processes to exit. Oct 9 01:07:49.590282 systemd[1]: sshd@23-172.31.16.164:22-147.75.109.163:49870.service: Deactivated successfully. Oct 9 01:07:49.593601 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 01:07:49.596061 systemd-logind[1957]: Removed session 24. Oct 9 01:07:54.621620 systemd[1]: Started sshd@24-172.31.16.164:22-147.75.109.163:49874.service - OpenSSH per-connection server daemon (147.75.109.163:49874). Oct 9 01:07:54.801811 sshd[6163]: Accepted publickey for core from 147.75.109.163 port 49874 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:07:54.804694 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:54.813764 systemd-logind[1957]: New session 25 of user core. 
Oct 9 01:07:54.820067 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 01:07:55.096085 sshd[6163]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:55.102573 systemd[1]: sshd@24-172.31.16.164:22-147.75.109.163:49874.service: Deactivated successfully. Oct 9 01:07:55.106660 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 01:07:55.109992 systemd-logind[1957]: Session 25 logged out. Waiting for processes to exit. Oct 9 01:07:55.115234 systemd-logind[1957]: Removed session 25. Oct 9 01:08:00.158942 systemd[1]: Started sshd@25-172.31.16.164:22-147.75.109.163:44506.service - OpenSSH per-connection server daemon (147.75.109.163:44506). Oct 9 01:08:00.337670 sshd[6205]: Accepted publickey for core from 147.75.109.163 port 44506 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:08:00.340725 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:00.350552 systemd-logind[1957]: New session 26 of user core. Oct 9 01:08:00.359085 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 9 01:08:00.736418 sshd[6205]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:00.744887 systemd-logind[1957]: Session 26 logged out. Waiting for processes to exit. Oct 9 01:08:00.745470 systemd[1]: sshd@25-172.31.16.164:22-147.75.109.163:44506.service: Deactivated successfully. Oct 9 01:08:00.748102 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 01:08:00.749321 systemd-logind[1957]: Removed session 26. Oct 9 01:08:05.790793 systemd[1]: Started sshd@26-172.31.16.164:22-147.75.109.163:44512.service - OpenSSH per-connection server daemon (147.75.109.163:44512). Oct 9 01:08:06.012716 sshd[6223]: Accepted publickey for core from 147.75.109.163 port 44512 ssh2: RSA SHA256:FhUkU4jerkfg/5zPvNrck26EEx2ZRZbowWXOKukiRxM Oct 9 01:08:06.015368 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:06.026879 systemd-logind[1957]: New session 27 of user core. Oct 9 01:08:06.031158 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 9 01:08:06.376639 sshd[6223]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:06.384238 systemd[1]: sshd@26-172.31.16.164:22-147.75.109.163:44512.service: Deactivated successfully. Oct 9 01:08:06.386938 systemd[1]: session-27.scope: Deactivated successfully. Oct 9 01:08:06.387772 systemd-logind[1957]: Session 27 logged out. Waiting for processes to exit. Oct 9 01:08:06.388962 systemd-logind[1957]: Removed session 27. Oct 9 01:08:21.302537 systemd[1]: cri-containerd-83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790.scope: Deactivated successfully. Oct 9 01:08:21.303165 systemd[1]: cri-containerd-83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790.scope: Consumed 5.973s CPU time. Oct 9 01:08:21.381678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790-rootfs.mount: Deactivated successfully. 
Oct 9 01:08:21.403668 containerd[1978]: time="2024-10-09T01:08:21.382579712Z" level=info msg="shim disconnected" id=83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790 namespace=k8s.io Oct 9 01:08:21.404249 containerd[1978]: time="2024-10-09T01:08:21.403670915Z" level=warning msg="cleaning up after shim disconnected" id=83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790 namespace=k8s.io Oct 9 01:08:21.404249 containerd[1978]: time="2024-10-09T01:08:21.403688281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:08:21.642063 systemd[1]: cri-containerd-41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164.scope: Deactivated successfully. Oct 9 01:08:21.642513 systemd[1]: cri-containerd-41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164.scope: Consumed 3.626s CPU time, 22.6M memory peak, 0B memory swap peak. Oct 9 01:08:21.690216 containerd[1978]: time="2024-10-09T01:08:21.689968130Z" level=info msg="shim disconnected" id=41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164 namespace=k8s.io Oct 9 01:08:21.690216 containerd[1978]: time="2024-10-09T01:08:21.690029858Z" level=warning msg="cleaning up after shim disconnected" id=41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164 namespace=k8s.io Oct 9 01:08:21.690216 containerd[1978]: time="2024-10-09T01:08:21.690042384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:08:21.695158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164-rootfs.mount: Deactivated successfully. Oct 9 01:08:21.943950 kubelet[3286]: I1009 01:08:21.943130 3286 scope.go:117] "RemoveContainer" containerID="41fd0c5fc95519e85337c057d4f9b4b3d8ffe6f8d3b87531bb3fd7b189ece164" Oct 9 01:08:21.943950 kubelet[3286]: I1009 01:08:21.943741 3286 scope.go:117] "RemoveContainer" containerID="83378590a9fb5ab00649279608b87dd1939f58192c04b58ad3b42b49f9e9e790" Oct 9 01:08:21.976017 containerd[1978]: time="2024-10-09T01:08:21.975495911Z" level=info msg="CreateContainer within sandbox \"b91c42d700fcc41b2466ac71355f63fb747d43358f83632740e2ef1b6cbe2a2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 9 01:08:22.007074 containerd[1978]: time="2024-10-09T01:08:22.006489892Z" level=info msg="CreateContainer within sandbox \"486ad755035f97958e6dcdf7af700f6f99523e14c73d130b7913275ea78c40e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 9 01:08:22.029598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209916524.mount: Deactivated successfully. 
Oct 9 01:08:22.046550 containerd[1978]: time="2024-10-09T01:08:22.046502073Z" level=info msg="CreateContainer within sandbox \"b91c42d700fcc41b2466ac71355f63fb747d43358f83632740e2ef1b6cbe2a2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cca5d302650534a46a14a65156cd1938301c19543b9602db33f25e87b78fa910\"" Oct 9 01:08:22.047665 containerd[1978]: time="2024-10-09T01:08:22.047628620Z" level=info msg="StartContainer for \"cca5d302650534a46a14a65156cd1938301c19543b9602db33f25e87b78fa910\"" Oct 9 01:08:22.055125 containerd[1978]: time="2024-10-09T01:08:22.055075411Z" level=info msg="CreateContainer within sandbox \"486ad755035f97958e6dcdf7af700f6f99523e14c73d130b7913275ea78c40e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"703337dbaaf86cb04e1d9d5ea29680763063dd5d4d54c2ec5227552a8e1dcd48\"" Oct 9 01:08:22.064495 containerd[1978]: time="2024-10-09T01:08:22.064039637Z" level=info msg="StartContainer for \"703337dbaaf86cb04e1d9d5ea29680763063dd5d4d54c2ec5227552a8e1dcd48\"" Oct 9 01:08:22.130148 systemd[1]: Started cri-containerd-cca5d302650534a46a14a65156cd1938301c19543b9602db33f25e87b78fa910.scope - libcontainer container cca5d302650534a46a14a65156cd1938301c19543b9602db33f25e87b78fa910. Oct 9 01:08:22.198129 systemd[1]: Started cri-containerd-703337dbaaf86cb04e1d9d5ea29680763063dd5d4d54c2ec5227552a8e1dcd48.scope - libcontainer container 703337dbaaf86cb04e1d9d5ea29680763063dd5d4d54c2ec5227552a8e1dcd48. Oct 9 01:08:22.239911 containerd[1978]: time="2024-10-09T01:08:22.239542708Z" level=info msg="StartContainer for \"cca5d302650534a46a14a65156cd1938301c19543b9602db33f25e87b78fa910\" returns successfully" Oct 9 01:08:22.278577 containerd[1978]: time="2024-10-09T01:08:22.278016928Z" level=info msg="StartContainer for \"703337dbaaf86cb04e1d9d5ea29680763063dd5d4d54c2ec5227552a8e1dcd48\" returns successfully" Oct 9 01:08:25.920092 systemd[1]: cri-containerd-8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57.scope: Deactivated successfully. Oct 9 01:08:25.923233 systemd[1]: cri-containerd-8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57.scope: Consumed 1.543s CPU time, 16.6M memory peak, 0B memory swap peak. Oct 9 01:08:25.962099 containerd[1978]: time="2024-10-09T01:08:25.961957698Z" level=info msg="shim disconnected" id=8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57 namespace=k8s.io Oct 9 01:08:25.962099 containerd[1978]: time="2024-10-09T01:08:25.962081850Z" level=warning msg="cleaning up after shim disconnected" id=8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57 namespace=k8s.io Oct 9 01:08:25.962099 containerd[1978]: time="2024-10-09T01:08:25.962097884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:08:25.967806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57-rootfs.mount: Deactivated successfully. Oct 9 01:08:26.983791 kubelet[3286]: I1009 01:08:26.983757 3286 scope.go:117] "RemoveContainer" containerID="8f8020a19f68e109e78dddba430dfe8c0e27c69bd43e840047b99fd11d16de57" Oct 9 01:08:26.989446 containerd[1978]: time="2024-10-09T01:08:26.989399763Z" level=info msg="CreateContainer within sandbox \"a7fe7a47dea88e8cecad4a90fe8ab07c9fd66793d6ef61c619cd1a5244812f1d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 9 01:08:27.024554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125519696.mount: Deactivated successfully. 
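The stretch above records three control-plane containers dying and being recreated in place (the kube-controller-manager, tigera-operator, and kube-scheduler containers): systemd deactivates each cri-containerd-&lt;id&gt;.scope and reports its consumed CPU time, containerd logs "shim disconnected" for the same ID, and kubelet removes the dead container and issues CreateContainer with Attempt:1 inside the original sandbox. A sketch that pulls the scope/CPU figures for disconnected shims out of an exported journal; the file name is a placeholder and the regexes only cover the exact phrasing seen above:

```python
import re

JOURNAL = "journal.log"   # hypothetical export, e.g. `journalctl --no-pager > journal.log`

# "cri-containerd-<id>.scope: Consumed 5.973s CPU time" (memory figure is optional)
SCOPE = re.compile(
    r"cri-containerd-([0-9a-f]{64})\.scope: Consumed ([\d.]+)s CPU time"
    r"(?:, ([\d.]+[KMG]) memory peak)?")
SHIM = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

def crashed_containers(path: str) -> None:
    """Report CPU consumed by each container whose shim disconnected."""
    cpu: dict[str, tuple[str, str | None]] = {}
    disconnected: list[str] = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if m := SCOPE.search(line):
                cpu[m.group(1)] = (m.group(2), m.group(3))
            if m := SHIM.search(line):
                disconnected.append(m.group(1))
    for cid in disconnected:
        secs, mem = cpu.get(cid, ("?", None))
        mem_note = f", {mem} memory peak" if mem else ""
        print(f"{cid[:12]}...  shim disconnected after {secs}s CPU{mem_note}")

if __name__ == "__main__":
    crashed_containers(JOURNAL)
```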
Oct 9 01:08:27.025032 containerd[1978]: time="2024-10-09T01:08:27.024988013Z" level=info msg="CreateContainer within sandbox \"a7fe7a47dea88e8cecad4a90fe8ab07c9fd66793d6ef61c619cd1a5244812f1d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"655efdf29f9160192496e65ccd65de623a0835d20adafee6de436e74e965187e\"" Oct 9 01:08:27.025620 containerd[1978]: time="2024-10-09T01:08:27.025508841Z" level=info msg="StartContainer for \"655efdf29f9160192496e65ccd65de623a0835d20adafee6de436e74e965187e\"" Oct 9 01:08:27.078983 systemd[1]: run-containerd-runc-k8s.io-655efdf29f9160192496e65ccd65de623a0835d20adafee6de436e74e965187e-runc.fm8aV6.mount: Deactivated successfully. Oct 9 01:08:27.092066 systemd[1]: Started cri-containerd-655efdf29f9160192496e65ccd65de623a0835d20adafee6de436e74e965187e.scope - libcontainer container 655efdf29f9160192496e65ccd65de623a0835d20adafee6de436e74e965187e. Oct 9 01:08:27.161904 containerd[1978]: time="2024-10-09T01:08:27.161783245Z" level=info msg="StartContainer for \"655efdf29f9160192496e65ccd65de623a0835d20adafee6de436e74e965187e\" returns successfully" Oct 9 01:08:28.290966 kubelet[3286]: E1009 01:08:28.285041 3286 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-164?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 9 01:08:38.291566 kubelet[3286]: E1009 01:08:38.291494 3286 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-164?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
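The section closes with kubelet twice failing to renew the node lease for ip-172-31-16-164 (client-side timeouts against the local API server, plausibly related to the restarts just above, though the log alone does not establish the cause). One way to see whether renewals resumed is to read the Lease object in kube-node-lease; a hedged sketch with the official kubernetes Python client, where only the node name is taken from the log:

```python
from kubernetes import client, config

NODE = "ip-172-31-16-164"   # node name from the lease-update errors above

def show_node_lease(node: str) -> None:
    """Print when the node's kube-node-lease Lease was last renewed."""
    config.load_kube_config()                     # assumes a working kubeconfig
    coord = client.CoordinationV1Api()
    lease = coord.read_namespaced_lease(node, "kube-node-lease")
    spec = lease.spec
    print(f"holder:        {spec.holder_identity}")
    print(f"renew time:    {spec.renew_time}")
    print(f"duration (s):  {spec.lease_duration_seconds}")

if __name__ == "__main__":
    show_node_lease(NODE)
```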