Nov 12 20:48:26.342900 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:48:26.342941 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:48:26.342958 kernel: BIOS-provided physical RAM map:
Nov 12 20:48:26.342970 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:48:26.342981 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:48:26.342993 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:48:26.343011 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Nov 12 20:48:26.343024 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Nov 12 20:48:26.343036 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Nov 12 20:48:26.343048 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:48:26.343061 kernel: NX (Execute Disable) protection: active
Nov 12 20:48:26.343073 kernel: APIC: Static calls initialized
Nov 12 20:48:26.343085 kernel: SMBIOS 2.7 present.
Nov 12 20:48:26.343098 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 12 20:48:26.343117 kernel: Hypervisor detected: KVM
Nov 12 20:48:26.343131 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:48:26.343145 kernel: kvm-clock: using sched offset of 8331475958 cycles
Nov 12 20:48:26.343160 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:48:26.343174 kernel: tsc: Detected 2499.996 MHz processor
Nov 12 20:48:26.343188 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:48:26.343203 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:48:26.343220 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Nov 12 20:48:26.343234 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:48:26.343248 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:48:26.343263 kernel: Using GB pages for direct mapping
Nov 12 20:48:26.343277 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:48:26.343291 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Nov 12 20:48:26.343306 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Nov 12 20:48:26.343320 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 12 20:48:26.343334 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 12 20:48:26.343351 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Nov 12 20:48:26.343366 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 12 20:48:26.343380 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 12 20:48:26.343394 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
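The e820 map above is the firmware's inventory of physical memory; the kernel only manages the ranges marked "usable". A minimal sketch of that arithmetic, with the two usable ranges copied from the log (an illustrative helper, not part of the boot flow):

```python
# Sum the "usable" BIOS-e820 ranges logged above (ranges are inclusive).
import re

e820_usable = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable",
]

total = 0
for line in e820_usable:
    m = re.search(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", line)
    start, end = int(m.group(1), 16), int(m.group(2), 16)
    total += end - start + 1  # inclusive range

print(f"usable RAM: {total / 2**20:.1f} MiB")
# ~2009.5 MiB, consistent with the "Memory: 1932348K/2057760K available"
# line the kernel prints further down in this log.
```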
Nov 12 20:48:26.343409 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 12 20:48:26.343423 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 12 20:48:26.343438 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 12 20:48:26.343452 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 12 20:48:26.343466 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Nov 12 20:48:26.343502 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Nov 12 20:48:26.343523 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Nov 12 20:48:26.343538 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Nov 12 20:48:26.343554 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Nov 12 20:48:26.343570 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Nov 12 20:48:26.343589 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Nov 12 20:48:26.343605 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Nov 12 20:48:26.343621 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Nov 12 20:48:26.343637 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Nov 12 20:48:26.343653 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:48:26.343669 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:48:26.343685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 12 20:48:26.343701 kernel: NUMA: Initialized distance table, cnt=1
Nov 12 20:48:26.343716 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Nov 12 20:48:26.343736 kernel: Zone ranges:
Nov 12 20:48:26.343824 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:48:26.343842 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Nov 12 20:48:26.343858 kernel: Normal empty
Nov 12 20:48:26.343875 kernel: Movable zone start for each node
Nov 12 20:48:26.343891 kernel: Early memory node ranges
Nov 12 20:48:26.343907 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:48:26.343923 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Nov 12 20:48:26.343939 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Nov 12 20:48:26.343955 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:48:26.343974 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:48:26.343990 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Nov 12 20:48:26.344006 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 12 20:48:26.344022 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:48:26.344037 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 12 20:48:26.344053 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:48:26.344069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:48:26.344085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:48:26.344101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:48:26.344120 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:48:26.344135 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:48:26.344286 kernel: TSC deadline timer available
Nov 12 20:48:26.344301 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
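The SRAT/SLIT entries above place both vCPUs and all memory on a single NUMA node. On a running system the same topology can be read back from sysfs; a small sketch using standard sysfs paths, nothing specific to this host:

```python
# Confirm the NUMA layout described by the SRAT/SLIT tables via sysfs.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpulist}")
# Expected here: "node0: CPUs 0-1", matching
# "SRAT: PXM 0 -> APIC 0x00 -> Node 0" and "... APIC 0x01 -> Node 0".
```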
Nov 12 20:48:26.344316 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:48:26.344332 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Nov 12 20:48:26.344347 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:48:26.344363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:48:26.344377 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:48:26.344396 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:48:26.344411 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:48:26.344426 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:48:26.344441 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:48:26.344457 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:48:26.344474 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:48:26.344501 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:48:26.344516 kernel: random: crng init done
Nov 12 20:48:26.344536 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:48:26.344551 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:48:26.344567 kernel: Fallback order for Node 0: 0
Nov 12 20:48:26.344582 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Nov 12 20:48:26.344598 kernel: Policy zone: DMA32
Nov 12 20:48:26.344614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:48:26.344630 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 125152K reserved, 0K cma-reserved)
Nov 12 20:48:26.344646 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:48:26.344660 kernel: Kernel/User page tables isolation: enabled
Nov 12 20:48:26.344679 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:48:26.344694 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:48:26.344710 kernel: Dynamic Preempt: voluntary
Nov 12 20:48:26.344726 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:48:26.344742 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:48:26.344758 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:48:26.344774 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:48:26.344790 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:48:26.344805 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:48:26.344824 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:48:26.344840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:48:26.344855 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 12 20:48:26.344869 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
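The percpu line is internally consistent: the static (s), reserved (r), and dynamic (d) areas fill exactly the 58 pages per CPU that the kernel reports, and the single 2 MiB allocation is split between the two CPUs. Checking the arithmetic from the logged values:

```python
# Verify "percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576".
PAGE = 4096
s, r, d, u = 197032, 8192, 32344, 1048576  # static, reserved, dynamic, unit size

used = s + r + d
assert used == 58 * PAGE        # 237568 bytes -> the "58 pages/cpu"
assert u == 2097152 // 2        # one "alloc=1*2097152" chunk split over 2 CPUs
print(used // PAGE, "pages per CPU;", u, "bytes per-cpu unit")
```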
Nov 12 20:48:26.344884 kernel: Console: colour VGA+ 80x25
Nov 12 20:48:26.344899 kernel: printk: console [ttyS0] enabled
Nov 12 20:48:26.344915 kernel: ACPI: Core revision 20230628
Nov 12 20:48:26.344931 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 12 20:48:26.344946 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:48:26.344965 kernel: x2apic enabled
Nov 12 20:48:26.344981 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:48:26.345009 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 12 20:48:26.345029 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Nov 12 20:48:26.345045 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 12 20:48:26.345062 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Nov 12 20:48:26.345078 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:48:26.345095 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:48:26.345111 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:48:26.345126 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:48:26.345143 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 12 20:48:26.345159 kernel: RETBleed: Vulnerable
Nov 12 20:48:26.345175 kernel: Speculative Store Bypass: Vulnerable
Nov 12 20:48:26.345195 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:48:26.345211 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:48:26.345228 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 12 20:48:26.345244 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:48:26.345260 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:48:26.345277 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:48:26.345296 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 12 20:48:26.345312 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 12 20:48:26.345328 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 12 20:48:26.345344 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 12 20:48:26.345360 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 12 20:48:26.345377 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 12 20:48:26.345393 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:48:26.345409 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 12 20:48:26.345426 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 12 20:48:26.345442 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 12 20:48:26.345458 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 12 20:48:26.345484 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 12 20:48:26.345498 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 12 20:48:26.345509 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
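The mitigation and vulnerability verdicts logged above are also exported under sysfs once the system is up, which is easier to query than grepping dmesg. A small sketch (standard sysfs path; the exact strings vary by kernel version and microcode):

```python
# Print the CPU vulnerability status the kernel decided on at boot.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for f in sorted(vuln_dir.iterdir()):
    print(f"{f.name}: {f.read_text().strip()}")
# On this instance one would expect entries like "retbleed: Vulnerable" and
# "mds: Vulnerable: Clear CPU buffers attempted, no microcode ...".
```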
Nov 12 20:48:26.345521 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:48:26.345532 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:48:26.345546 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:48:26.345560 kernel: landlock: Up and running.
Nov 12 20:48:26.345575 kernel: SELinux: Initializing.
Nov 12 20:48:26.345588 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:48:26.345601 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:48:26.345615 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 12 20:48:26.345633 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:48:26.345649 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:48:26.345662 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:48:26.345676 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 12 20:48:26.345692 kernel: signal: max sigframe size: 3632
Nov 12 20:48:26.345709 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:48:26.345726 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:48:26.345743 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:48:26.345760 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:48:26.345779 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:48:26.345797 kernel: .... node #0, CPUs: #1
Nov 12 20:48:26.345812 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 12 20:48:26.345827 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 12 20:48:26.345841 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:48:26.345854 kernel: smpboot: Max logical packages: 1
Nov 12 20:48:26.345867 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Nov 12 20:48:26.345884 kernel: devtmpfs: initialized
Nov 12 20:48:26.345899 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:48:26.345918 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:48:26.345935 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:48:26.345951 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:48:26.345968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:48:26.345985 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:48:26.346003 kernel: audit: type=2000 audit(1731444504.544:1): state=initialized audit_enabled=0 res=1
Nov 12 20:48:26.346020 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:48:26.346037 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:48:26.346054 kernel: cpuidle: using governor menu
Nov 12 20:48:26.346075 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:48:26.346088 kernel: dca service started, version 1.12.1
Nov 12 20:48:26.346103 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:48:26.346119 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
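The audit record's timestamp, audit(1731444504.544:1), is a Unix epoch, so it can be cross-checked against the wall-clock time the RTC reports later in this log:

```python
# Convert the audit epoch to UTC wall-clock time.
from datetime import datetime, timezone

ts = 1731444504.544
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2024-11-12T20:48:24.544000+00:00, about one second before rtc_cmos sets
#    the system clock to 2024-11-12T20:48:25 UTC (1731444505) further down.
```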
Nov 12 20:48:26.346135 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:48:26.346150 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:48:26.346164 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:48:26.346177 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:48:26.346196 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:48:26.346211 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:48:26.346225 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:48:26.346240 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:48:26.346253 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 12 20:48:26.346266 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:48:26.346279 kernel: ACPI: Interpreter enabled
Nov 12 20:48:26.346291 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:48:26.346305 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:48:26.346319 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:48:26.346337 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:48:26.346351 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 12 20:48:26.346365 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:48:26.346602 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:48:26.346737 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 12 20:48:26.346870 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 12 20:48:26.346889 kernel: acpiphp: Slot [3] registered
Nov 12 20:48:26.346907 kernel: acpiphp: Slot [4] registered
Nov 12 20:48:26.346921 kernel: acpiphp: Slot [5] registered
Nov 12 20:48:26.346934 kernel: acpiphp: Slot [6] registered
Nov 12 20:48:26.346948 kernel: acpiphp: Slot [7] registered
Nov 12 20:48:26.346962 kernel: acpiphp: Slot [8] registered
Nov 12 20:48:26.346975 kernel: acpiphp: Slot [9] registered
Nov 12 20:48:26.346989 kernel: acpiphp: Slot [10] registered
Nov 12 20:48:26.347003 kernel: acpiphp: Slot [11] registered
Nov 12 20:48:26.347016 kernel: acpiphp: Slot [12] registered
Nov 12 20:48:26.347032 kernel: acpiphp: Slot [13] registered
Nov 12 20:48:26.347046 kernel: acpiphp: Slot [14] registered
Nov 12 20:48:26.347059 kernel: acpiphp: Slot [15] registered
Nov 12 20:48:26.347076 kernel: acpiphp: Slot [16] registered
Nov 12 20:48:26.347092 kernel: acpiphp: Slot [17] registered
Nov 12 20:48:26.347108 kernel: acpiphp: Slot [18] registered
Nov 12 20:48:26.347122 kernel: acpiphp: Slot [19] registered
Nov 12 20:48:26.347138 kernel: acpiphp: Slot [20] registered
Nov 12 20:48:26.347155 kernel: acpiphp: Slot [21] registered
Nov 12 20:48:26.347171 kernel: acpiphp: Slot [22] registered
Nov 12 20:48:26.347191 kernel: acpiphp: Slot [23] registered
Nov 12 20:48:26.347208 kernel: acpiphp: Slot [24] registered
Nov 12 20:48:26.347224 kernel: acpiphp: Slot [25] registered
Nov 12 20:48:26.347241 kernel: acpiphp: Slot [26] registered
Nov 12 20:48:26.347257 kernel: acpiphp: Slot [27] registered
Nov 12 20:48:26.347274 kernel: acpiphp: Slot [28] registered
Nov 12 20:48:26.347290 kernel: acpiphp: Slot [29] registered
Nov 12 20:48:26.347307 kernel: acpiphp: Slot [30] registered
Nov 12 20:48:26.347323 kernel: acpiphp: Slot [31] registered
Nov 12 20:48:26.347343 kernel: PCI host bridge to bus 0000:00
Nov 12 20:48:26.347520 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:48:26.347648 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:48:26.347867 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:48:26.348003 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 12 20:48:26.348124 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:48:26.348278 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:48:26.348431 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 12 20:48:26.348592 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Nov 12 20:48:26.348729 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 12 20:48:26.348947 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Nov 12 20:48:26.349089 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 12 20:48:26.349225 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 12 20:48:26.349360 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 12 20:48:26.349516 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 12 20:48:26.351074 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 12 20:48:26.351296 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 12 20:48:26.351444 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 18554 usecs
Nov 12 20:48:26.351622 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Nov 12 20:48:26.351826 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Nov 12 20:48:26.351976 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 12 20:48:26.352115 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:48:26.352267 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 12 20:48:26.352391 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Nov 12 20:48:26.352546 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 12 20:48:26.352671 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Nov 12 20:48:26.352689 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:48:26.352708 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:48:26.352722 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:48:26.352735 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:48:26.352749 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:48:26.352763 kernel: iommu: Default domain type: Translated
Nov 12 20:48:26.352777 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:48:26.352790 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:48:26.352804 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:48:26.352818 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:48:26.352835 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Nov 12 20:48:26.352957 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 12 20:48:26.353087 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 12 20:48:26.353223 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:48:26.353243 kernel: vgaarb: loaded
Nov 12 20:48:26.353261 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 12 20:48:26.353278 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 12 20:48:26.353294 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:48:26.353311 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:48:26.353332 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:48:26.353348 kernel: pnp: PnP ACPI init
Nov 12 20:48:26.353365 kernel: pnp: PnP ACPI: found 5 devices
Nov 12 20:48:26.353382 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:48:26.353399 kernel: NET: Registered PF_INET protocol family
Nov 12 20:48:26.353416 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:48:26.353433 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 12 20:48:26.353449 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:48:26.353466 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:48:26.353595 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 12 20:48:26.353612 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 12 20:48:26.353630 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:48:26.353646 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:48:26.353663 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:48:26.353680 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:48:26.353820 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:48:26.353945 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:48:26.354069 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:48:26.354189 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 12 20:48:26.354330 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:48:26.354351 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:48:26.354368 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:48:26.354386 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 12 20:48:26.354404 kernel: clocksource: Switched to clocksource tsc
Nov 12 20:48:26.354420 kernel: Initialise system trusted keyrings
Nov 12 20:48:26.354442 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 12 20:48:26.354459 kernel: Key type asymmetric registered
Nov 12 20:48:26.354486 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:48:26.354500 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:48:26.354514 kernel: io scheduler mq-deadline registered
Nov 12 20:48:26.354527 kernel: io scheduler kyber registered
Nov 12 20:48:26.354541 kernel: io scheduler bfq registered
Nov 12 20:48:26.354554 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:48:26.354567 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:48:26.354585 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:48:26.354599 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:48:26.354613 kernel: i8042: Warning: Keylock active
Nov 12 20:48:26.354627 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:48:26.354641 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:48:26.354790 kernel: rtc_cmos 00:00: RTC can wake from S4
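This log switches clocksources twice: to kvm-clock early, then to tsc once the TSC is fully qualified (the "Switched to clocksource tsc" line above). On a running system the final choice is visible in sysfs; a small sketch:

```python
# Read the active and available clocksources from sysfs.
from pathlib import Path

cs = Path("/sys/devices/system/clocksource/clocksource0")
print("current:  ", (cs / "current_clocksource").read_text().strip())
print("available:", (cs / "available_clocksource").read_text().strip())
# Expected here: current "tsc", with kvm-clock, acpi_pm, etc. listed as
# available (exact set and order depend on the kernel).
```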
Nov 12 20:48:26.355039 kernel: rtc_cmos 00:00: registered as rtc0
Nov 12 20:48:26.355218 kernel: rtc_cmos 00:00: setting system clock to 2024-11-12T20:48:25 UTC (1731444505)
Nov 12 20:48:26.355345 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 12 20:48:26.355365 kernel: intel_pstate: CPU model not supported
Nov 12 20:48:26.355381 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:48:26.355397 kernel: Segment Routing with IPv6
Nov 12 20:48:26.355413 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:48:26.355428 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:48:26.355444 kernel: Key type dns_resolver registered
Nov 12 20:48:26.355457 kernel: IPI shorthand broadcast: enabled
Nov 12 20:48:26.355471 kernel: sched_clock: Marking stable (850088128, 471362531)->(1577409347, -255958688)
Nov 12 20:48:26.355519 kernel: registered taskstats version 1
Nov 12 20:48:26.355534 kernel: Loading compiled-in X.509 certificates
Nov 12 20:48:26.355551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:48:26.355566 kernel: Key type .fscrypt registered
Nov 12 20:48:26.355581 kernel: Key type fscrypt-provisioning registered
Nov 12 20:48:26.355596 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:48:26.355610 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:48:26.355625 kernel: ima: No architecture policies found
Nov 12 20:48:26.355641 kernel: clk: Disabling unused clocks
Nov 12 20:48:26.355659 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:48:26.355674 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:48:26.355690 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:48:26.355704 kernel: Run /init as init process
Nov 12 20:48:26.355719 kernel: with arguments:
Nov 12 20:48:26.355733 kernel: /init
Nov 12 20:48:26.355801 kernel: with environment:
Nov 12 20:48:26.355818 kernel: HOME=/
Nov 12 20:48:26.355833 kernel: TERM=linux
Nov 12 20:48:26.355852 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:48:26.355897 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:48:26.355915 systemd[1]: Detected virtualization amazon.
Nov 12 20:48:26.355932 systemd[1]: Detected architecture x86-64.
Nov 12 20:48:26.355947 systemd[1]: Running in initrd.
Nov 12 20:48:26.355963 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:48:26.355977 systemd[1]: Hostname set to <localhost>.
Nov 12 20:48:26.355997 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:48:26.356013 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:48:26.356030 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:48:26.356046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:48:26.356063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:48:26.356082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:48:26.356099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:48:26.356116 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:48:26.356133 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:48:26.356150 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:48:26.356167 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:48:26.356183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:48:26.356197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:48:26.356211 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:48:26.356230 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:48:26.356249 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:48:26.356267 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:48:26.356284 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:48:26.356302 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:48:26.356321 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:48:26.356339 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:48:26.356359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:48:26.356377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:48:26.356400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:48:26.356419 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:48:26.356438 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:48:26.356456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:48:26.356491 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:48:26.356518 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:48:26.356537 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:48:26.356556 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:48:26.356575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:26.356626 systemd-journald[177]: Collecting audit messages is disabled.
Nov 12 20:48:26.356667 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:48:26.356684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:48:26.356704 systemd-journald[177]: Journal started
Nov 12 20:48:26.356741 systemd-journald[177]: Runtime Journal (/run/log/journal/ec2ad6d4c289e85ac0433912c3545cd7) is 4.8M, max 38.6M, 33.7M free.
Nov 12 20:48:26.362863 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:48:26.361565 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:48:26.369840 systemd-modules-load[179]: Inserted module 'overlay'
Nov 12 20:48:26.383507 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:48:26.399130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
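The `\x2d` sequences in the device unit names above are systemd's unit-name escaping: '/' becomes '-', and '-' (along with other non-safe characters) becomes a `\xNN` hex escape. A rough sketch of what `systemd-escape --path` does (simplified; the real implementation handles more edge cases such as leading dots and non-ASCII bytes):

```python
# Minimal approximation of systemd's path-to-unit-name escaping.
def systemd_escape_path(path: str) -> str:
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch == "/":
            out.append("-")                          # path separators -> dashes
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)                           # safe characters pass through
        else:
            out.append("\\x%02x" % ord(ch))          # everything else, incl. '-', is hex-escaped
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit names above.
```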
Nov 12 20:48:26.442663 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:48:26.484137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:48:26.770270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:48:26.770311 kernel: Bridge firewalling registered
Nov 12 20:48:26.508203 systemd-modules-load[179]: Inserted module 'br_netfilter'
Nov 12 20:48:26.788744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:48:26.797021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:26.797553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:48:26.849359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:48:26.870173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:48:26.898442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:48:26.916250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:48:26.942848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:48:26.956319 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:48:26.968689 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:48:27.035564 dracut-cmdline[215]: dracut-dracut-053
Nov 12 20:48:27.045670 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:48:27.063643 systemd-resolved[213]: Positive Trust Anchors:
Nov 12 20:48:27.063659 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:48:27.063719 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:48:27.082025 systemd-resolved[213]: Defaulting to hostname 'linux'.
Nov 12 20:48:27.084937 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:48:27.096301 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:48:27.357341 kernel: SCSI subsystem initialized
Nov 12 20:48:27.375501 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:48:27.399896 kernel: iscsi: registered transport (tcp)
Nov 12 20:48:27.464469 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:48:27.464570 kernel: QLogic iSCSI HBA Driver
Nov 12 20:48:27.557787 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:48:27.582736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:48:27.712782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:48:27.712870 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:48:27.712902 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:48:27.823629 kernel: raid6: avx512x4 gen() 6842 MB/s
Nov 12 20:48:27.841613 kernel: raid6: avx512x2 gen() 7916 MB/s
Nov 12 20:48:27.858523 kernel: raid6: avx512x1 gen() 4690 MB/s
Nov 12 20:48:27.876840 kernel: raid6: avx2x4 gen() 6052 MB/s
Nov 12 20:48:27.893538 kernel: raid6: avx2x2 gen() 13646 MB/s
Nov 12 20:48:27.911699 kernel: raid6: avx2x1 gen() 8406 MB/s
Nov 12 20:48:27.911948 kernel: raid6: using algorithm avx2x2 gen() 13646 MB/s
Nov 12 20:48:27.931400 kernel: raid6: .... xor() 10966 MB/s, rmw enabled
Nov 12 20:48:27.931518 kernel: raid6: using avx512x2 recovery algorithm
Nov 12 20:48:27.968844 kernel: xor: automatically using best checksumming function avx
Nov 12 20:48:28.264513 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:48:28.275847 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:48:28.283869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:48:28.308436 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 12 20:48:28.315028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:48:28.330765 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:48:28.407542 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Nov 12 20:48:28.552061 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:48:28.560776 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:48:28.717903 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:48:28.739569 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:48:28.843091 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:48:28.852385 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:48:28.853703 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:48:28.853891 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:48:28.880134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:48:28.973587 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:48:28.996503 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:48:29.012933 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 12 20:48:29.140388 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 12 20:48:29.140631 kernel: AVX2 version of gcm_enc/dec engaged.
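The raid6 lines are a boot-time benchmark: each generator variant is timed and the kernel keeps the fastest one, which is why avx2x2 wins here even though AVX-512 is available (AVX-512 throughput can be lower inside a guest, possibly due to frequency effects; the recovery algorithm is chosen separately). Reproducing the selection from the logged numbers:

```python
# Pick the fastest raid6 generator from the benchmark results logged above.
results = {
    "avx512x4": 6842, "avx512x2": 7916, "avx512x1": 4690,
    "avx2x4": 6052, "avx2x2": 13646, "avx2x1": 8406,
}  # gen() MB/s as measured at boot on this t3.small

best = max(results, key=results.get)
print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
# -> avx2x2, matching the kernel's choice in the log.
```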
Nov 12 20:48:29.140661 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:48:29.140679 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 12 20:48:29.140856 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 12 20:48:29.140878 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Nov 12 20:48:29.141035 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 12 20:48:29.141172 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ec:d8:86:6a:9b
Nov 12 20:48:29.117854 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:48:29.118011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:48:29.153666 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:48:29.153702 kernel: GPT:9289727 != 16777215
Nov 12 20:48:29.153720 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:48:29.153741 kernel: GPT:9289727 != 16777215
Nov 12 20:48:29.153758 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:48:29.153774 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:48:29.120721 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:48:29.122742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:48:29.122972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:29.124680 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:29.134950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:29.173632 (udev-worker)[444]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 20:48:29.369529 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Nov 12 20:48:29.369562 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (452)
Nov 12 20:48:29.371843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:29.381762 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:48:29.406432 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 12 20:48:29.455528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 12 20:48:29.458052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:48:29.519177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 12 20:48:29.548501 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 12 20:48:29.550417 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 12 20:48:29.565063 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:48:29.596608 disk-uuid[629]: Primary Header is updated.
Nov 12 20:48:29.596608 disk-uuid[629]: Secondary Entries is updated.
Nov 12 20:48:29.596608 disk-uuid[629]: Secondary Header is updated.
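The "GPT:9289727 != 16777215" complaint is plain arithmetic: the backup GPT header must sit on the last LBA of the disk, but the stamped Flatcar image was built for a smaller disk than this EBS volume, so the backup header is found too early until disk-uuid relocates it (the "Secondary Header is updated." lines above). The numbers:

```python
# Why the kernel reports "GPT:9289727 != 16777215" on first boot.
SECTOR = 512
disk_sectors = 16777216            # this volume: 16777216 * 512 B = 8 GiB
expected_backup_lba = disk_sectors - 1
image_backup_lba = 9289727         # where the prebuilt image placed it

print(expected_backup_lba)                        # 16777215, as in the log
print((image_backup_lba + 1) * SECTOR / 2**30)    # ~4.43 GiB original image size
# Rewriting the table (disk-uuid here, or tools like sgdisk/parted) moves the
# backup header to the true last LBA, which is why the warning disappears.
```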
Nov 12 20:48:29.610380 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:48:29.620529 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:48:29.631500 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:48:30.638508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:48:30.639456 disk-uuid[630]: The operation has completed successfully.
Nov 12 20:48:30.933983 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:48:30.934118 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:48:30.960812 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:48:30.987549 sh[973]: Success
Nov 12 20:48:31.008239 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:48:31.137663 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:48:31.157733 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:48:31.162729 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:48:31.230507 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:48:31.230592 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:31.237574 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:48:31.237660 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:48:31.238819 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:48:31.268592 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 20:48:31.273906 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:48:31.281327 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:48:31.294844 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:48:31.304776 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:48:31.361750 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:31.361839 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:31.361861 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 20:48:31.374818 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 20:48:31.406456 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:31.406016 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:48:31.426843 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:48:31.437749 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:48:31.659874 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:48:31.689826 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:48:31.797860 ignition[1082]: Ignition 2.19.0
Nov 12 20:48:31.797886 ignition[1082]: Stage: fetch-offline
Nov 12 20:48:31.798192 ignition[1082]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:31.798204 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:48:31.812029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:48:31.798751 ignition[1082]: Ignition finished successfully
Nov 12 20:48:31.843977 systemd-networkd[1167]: lo: Link UP
Nov 12 20:48:31.844885 systemd-networkd[1167]: lo: Gained carrier
Nov 12 20:48:31.851132 systemd-networkd[1167]: Enumeration completed
Nov 12 20:48:31.851788 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:48:31.851793 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:48:31.853075 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:48:31.853703 systemd[1]: Reached target network.target - Network.
Nov 12 20:48:31.877509 systemd-networkd[1167]: eth0: Link UP
Nov 12 20:48:31.877515 systemd-networkd[1167]: eth0: Gained carrier
Nov 12 20:48:31.877529 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:48:31.887813 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:48:31.919609 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.17.74/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 12 20:48:31.933116 ignition[1174]: Ignition 2.19.0
Nov 12 20:48:31.933130 ignition[1174]: Stage: fetch
Nov 12 20:48:31.940812 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:31.940832 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:48:31.940952 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:48:32.001607 ignition[1174]: PUT result: OK
Nov 12 20:48:32.010227 ignition[1174]: parsed url from cmdline: ""
Nov 12 20:48:32.010238 ignition[1174]: no config URL provided
Nov 12 20:48:32.010249 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:48:32.010263 ignition[1174]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:48:32.010291 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:48:32.013150 ignition[1174]: PUT result: OK
Nov 12 20:48:32.013225 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 12 20:48:32.020808 ignition[1174]: GET result: OK
Nov 12 20:48:32.024100 ignition[1174]: parsing config with SHA512: ea6db6ed7e7935692feae6a6759acbacfb6f6e1a5ad36f4a59e765e6fe3972cba0eede95a9bdb4551cbfaf4cdf62a8111404b8d476e7a24dbd0933c1edabd194
Nov 12 20:48:32.031468 unknown[1174]: fetched base config from "system"
Nov 12 20:48:32.031496 unknown[1174]: fetched base config from "system"
Nov 12 20:48:32.032198 ignition[1174]: fetch: fetch complete
Nov 12 20:48:32.031544 unknown[1174]: fetched user config from "aws"
Nov 12 20:48:32.032206 ignition[1174]: fetch: fetch passed
Nov 12 20:48:32.037459 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:48:32.032265 ignition[1174]: Ignition finished successfully
Nov 12 20:48:32.046741 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
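The PUT-then-GET pair in the fetch stage is the IMDSv2 flow: Ignition first requests a session token with a PUT, then presents it on the GET for user-data. A standalone sketch of the same exchange (only works from inside an EC2 instance; the helper name and the TTL value are our choices, not from the log):

```python
# IMDSv2: obtain a session token, then fetch user-data with it.
import urllib.request

def imds_user_data() -> bytes:
    tok_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(tok_req, timeout=2).read().decode()

    data_req = urllib.request.Request(
        "http://169.254.169.254/2019-10-01/user-data",  # endpoint from the log
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(data_req, timeout=2).read()

# Ignition then hashes the fetched config (the "parsing config with SHA512"
# line); hashlib.sha512(imds_user_data()).hexdigest() would reproduce that digest.
```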
Nov 12 20:48:32.067397 ignition[1181]: Ignition 2.19.0
Nov 12 20:48:32.067412 ignition[1181]: Stage: kargs
Nov 12 20:48:32.067926 ignition[1181]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:32.067940 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:48:32.068047 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:48:32.069851 ignition[1181]: PUT result: OK
Nov 12 20:48:32.075715 ignition[1181]: kargs: kargs passed
Nov 12 20:48:32.075779 ignition[1181]: Ignition finished successfully
Nov 12 20:48:32.086123 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:48:32.100428 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:48:32.176086 ignition[1187]: Ignition 2.19.0
Nov 12 20:48:32.176198 ignition[1187]: Stage: disks
Nov 12 20:48:32.177022 ignition[1187]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:32.177035 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:48:32.177141 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:48:32.181229 ignition[1187]: PUT result: OK
Nov 12 20:48:32.206714 ignition[1187]: disks: disks passed
Nov 12 20:48:32.206820 ignition[1187]: Ignition finished successfully
Nov 12 20:48:32.211503 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:48:32.218597 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:48:32.225368 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:48:32.233300 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:48:32.245417 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:48:32.251351 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:48:32.264734 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:48:32.331377 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:48:32.338258 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:48:32.375991 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:48:32.788252 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:48:32.790648 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:48:32.794538 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:48:32.816683 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:48:32.840665 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:48:32.848900 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:48:32.849048 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:48:32.849096 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:48:32.892316 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:48:32.902020 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1214)
Nov 12 20:48:32.902623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:48:32.915293 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:32.915375 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:32.915396 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 20:48:32.925521 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 20:48:32.929154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:48:33.166247 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:48:33.194825 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:48:33.209465 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:48:33.221509 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:48:33.253674 systemd-networkd[1167]: eth0: Gained IPv6LL
Nov 12 20:48:33.413057 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:48:33.428650 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:48:33.436712 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:48:33.460069 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:48:33.463563 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:33.531916 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:48:33.544029 ignition[1327]: INFO : Ignition 2.19.0
Nov 12 20:48:33.544029 ignition[1327]: INFO : Stage: mount
Nov 12 20:48:33.548994 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:33.548994 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:48:33.548994 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:48:33.557428 ignition[1327]: INFO : PUT result: OK
Nov 12 20:48:33.564958 ignition[1327]: INFO : mount: mount passed
Nov 12 20:48:33.564958 ignition[1327]: INFO : Ignition finished successfully
Nov 12 20:48:33.567111 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:48:33.577716 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:48:33.795812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:48:33.825499 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1339)
Nov 12 20:48:33.831647 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:33.831722 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:33.831742 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 20:48:33.837489 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 20:48:33.839586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:48:33.869066 ignition[1356]: INFO : Ignition 2.19.0 Nov 12 20:48:33.871975 ignition[1356]: INFO : Stage: files Nov 12 20:48:33.871975 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:48:33.871975 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 20:48:33.871975 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 20:48:33.871975 ignition[1356]: INFO : PUT result: OK Nov 12 20:48:33.896074 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:48:33.898950 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:48:33.898950 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:48:33.912506 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:48:33.914862 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:48:33.914862 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:48:33.913009 unknown[1356]: wrote ssh authorized keys file for user: core Nov 12 20:48:33.930056 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:48:33.930056 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:48:34.001514 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:48:34.234798 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:48:34.234798 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:48:34.250679 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:48:34.250679 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:48:34.250679 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:48:34.250679 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:48:34.250679 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:48:34.250679 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:48:34.250679 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:48:34.327101 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:48:34.327101 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:48:34.327101 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:48:34.327101 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:48:34.327101 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:48:34.327101 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Nov 12 20:48:34.721522 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:48:35.230359 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:48:35.230359 ignition[1356]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 20:48:35.258766 ignition[1356]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:48:35.262207 ignition[1356]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:48:35.262207 ignition[1356]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 20:48:35.262207 ignition[1356]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:48:35.262207 ignition[1356]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:48:35.262207 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:48:35.262207 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:48:35.262207 ignition[1356]: INFO : files: files passed Nov 12 20:48:35.262207 ignition[1356]: INFO : Ignition finished successfully Nov 12 20:48:35.263666 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:48:35.346385 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:48:35.354693 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:48:35.371742 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:48:35.373801 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:48:35.410074 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:48:35.410074 initrd-setup-root-after-ignition[1385]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:48:35.414934 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:48:35.425914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:48:35.428988 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:48:35.446813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:48:35.551681 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:48:35.551811 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:48:35.563329 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:48:35.566682 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:48:35.592801 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:48:35.606763 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:48:35.645946 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:48:35.661625 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:48:35.728558 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:48:35.729152 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:48:35.729327 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:48:35.729579 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:48:35.729772 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:48:35.730334 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:48:35.730598 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:48:35.730802 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:48:35.731316 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:48:35.736164 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:48:35.736810 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:48:35.736964 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:48:35.737170 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:48:35.737331 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:48:35.737520 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:48:35.737639 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:48:35.737799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:48:35.738254 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:48:35.738452 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:48:35.739058 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:48:35.772946 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:48:35.774705 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:48:35.774920 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:48:35.787244 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:48:35.787448 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:48:35.806620 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:48:35.806921 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:48:35.846799 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:48:35.910224 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Nov 12 20:48:35.924427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:48:35.924727 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:48:35.958784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:48:35.966762 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:48:36.055303 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:48:36.060224 ignition[1409]: INFO : Ignition 2.19.0 Nov 12 20:48:36.060224 ignition[1409]: INFO : Stage: umount Nov 12 20:48:36.064781 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:48:36.064781 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 20:48:36.064781 ignition[1409]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 20:48:36.083563 ignition[1409]: INFO : PUT result: OK Nov 12 20:48:36.066594 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:48:36.090359 ignition[1409]: INFO : umount: umount passed Nov 12 20:48:36.090359 ignition[1409]: INFO : Ignition finished successfully Nov 12 20:48:36.097829 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:48:36.097959 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:48:36.128675 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:48:36.130261 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:48:36.138615 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:48:36.138697 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:48:36.145085 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:48:36.145173 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 20:48:36.148709 systemd[1]: Stopped target network.target - Network. Nov 12 20:48:36.150284 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:48:36.150384 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:48:36.152795 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:48:36.154101 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:48:36.170280 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:48:36.174675 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:48:36.174932 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:48:36.175153 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:48:36.175209 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:48:36.175780 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:48:36.175834 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:48:36.176219 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:48:36.176273 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:48:36.190051 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:48:36.190136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:48:36.196643 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:48:36.197250 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Nov 12 20:48:36.207056 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:48:36.300063 systemd-networkd[1167]: eth0: DHCPv6 lease lost Nov 12 20:48:36.317363 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:48:36.319378 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:48:36.333829 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:48:36.334009 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:48:36.348832 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:48:36.349023 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:48:36.356270 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:48:36.358942 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:48:36.367579 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:48:36.367667 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:48:36.386912 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:48:36.395242 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:48:36.397299 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:48:36.408509 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:48:36.408721 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:48:36.414138 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:48:36.414222 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:48:36.422193 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:48:36.422275 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:48:36.431513 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:48:36.480561 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:48:36.489547 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:48:36.500948 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:48:36.501026 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:48:36.527340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:48:36.529138 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:48:36.534969 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:48:36.535049 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:48:36.546118 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:48:36.546194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:48:36.559709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:48:36.563356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:48:36.576768 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:48:36.578882 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:48:36.581270 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 12 20:48:36.588715 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:48:36.588785 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:48:36.596529 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:48:36.596610 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:48:36.605518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:48:36.605763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:48:36.611506 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:48:36.611651 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:48:36.617070 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:48:36.617244 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:48:36.619438 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:48:36.640940 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:48:36.658681 systemd[1]: Switching root. Nov 12 20:48:36.724379 systemd-journald[177]: Journal stopped Nov 12 20:48:39.615798 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Nov 12 20:48:39.615903 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:48:39.615928 kernel: SELinux: policy capability open_perms=1 Nov 12 20:48:39.615955 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:48:39.615976 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:48:39.615997 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:48:39.616019 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:48:39.616046 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:48:39.616074 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:48:39.616096 kernel: audit: type=1403 audit(1731444517.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:48:39.616120 systemd[1]: Successfully loaded SELinux policy in 85.713ms. Nov 12 20:48:39.616158 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.653ms. Nov 12 20:48:39.616188 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:48:39.616211 systemd[1]: Detected virtualization amazon. Nov 12 20:48:39.616233 systemd[1]: Detected architecture x86-64. Nov 12 20:48:39.616254 systemd[1]: Detected first boot. Nov 12 20:48:39.616275 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:48:39.616298 zram_generator::config[1452]: No configuration found. Nov 12 20:48:39.616320 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:48:39.616345 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:48:39.616367 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 20:48:39.616389 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:48:39.616410 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
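The audit record and the "Successfully loaded SELinux policy" timing above mark the handoff from the initrd to the real root, with PID 1 relabeling /dev, /run, and the cgroup tree under the loaded policy. A small check of the resulting mode at runtime, reading the selinuxfs interface directly (assumes the usual mount point /sys/fs/selinux):

    from pathlib import Path

    # "1" means enforcing, "0" means permissive; the file is absent when
    # SELinux is disabled or selinuxfs is not mounted.
    enforce = Path("/sys/fs/selinux/enforce")
    if not enforce.exists():
        print("SELinux disabled (or selinuxfs not mounted)")
    else:
        print("enforcing" if enforce.read_text().strip() == "1" else "permissive")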
Nov 12 20:48:39.616429 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:48:39.616447 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:48:39.616468 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:48:39.616503 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:48:39.616523 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:48:39.616547 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:48:39.616567 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:48:39.616586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:48:39.616606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:48:39.616624 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:48:39.616644 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:48:39.616663 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:48:39.616684 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:48:39.616703 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:48:39.616727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:48:39.616745 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 20:48:39.616764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:48:39.616782 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:48:39.616802 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:48:39.616823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:48:39.616843 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:48:39.616863 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:48:39.616887 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:48:39.616905 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:48:39.616922 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:48:39.616939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:48:39.616955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:48:39.616971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:48:39.616988 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:48:39.617005 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:48:39.617024 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:48:39.617047 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:48:39.617066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:39.617085 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Nov 12 20:48:39.617102 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:48:39.617121 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:48:39.617141 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:48:39.617161 systemd[1]: Reached target machines.target - Containers. Nov 12 20:48:39.617181 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:48:39.617202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:48:39.617224 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:48:39.617241 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:48:39.617259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:48:39.617280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:48:39.617300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:48:39.617319 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:48:39.617341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:48:39.617359 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:48:39.617378 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:48:39.617395 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:48:39.617413 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:48:39.617428 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 20:48:39.617444 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:48:39.617461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:48:39.617502 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:48:39.617528 kernel: loop: module loaded Nov 12 20:48:39.617553 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:48:39.617578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:48:39.617595 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:48:39.617618 systemd[1]: Stopped verity-setup.service. Nov 12 20:48:39.617640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:39.617658 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:48:39.617676 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:48:39.617694 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:48:39.617711 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:48:39.617728 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:48:39.617847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:48:39.617867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 12 20:48:39.617886 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:48:39.617904 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:48:39.617926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:48:39.617945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:48:39.618109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:48:39.618135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:48:39.618156 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:48:39.618176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:48:39.618204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:48:39.618224 kernel: fuse: init (API version 7.39) Nov 12 20:48:39.618326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:48:39.618349 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:48:39.618369 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:48:39.618388 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:48:39.618444 systemd-journald[1531]: Collecting audit messages is disabled. Nov 12 20:48:39.618512 kernel: ACPI: bus type drm_connector registered Nov 12 20:48:39.618537 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:48:39.618563 systemd-journald[1531]: Journal started Nov 12 20:48:39.618603 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec2ad6d4c289e85ac0433912c3545cd7) is 4.8M, max 38.6M, 33.7M free. Nov 12 20:48:38.721140 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:48:38.740871 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 12 20:48:38.741509 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:48:39.631889 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:48:39.631990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:48:39.641352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:48:39.683928 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:48:39.691598 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:48:39.702716 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:48:39.705726 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:48:39.706831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:48:39.709698 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:48:39.714092 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:48:39.722548 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:48:39.769248 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:48:39.769311 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 12 20:48:39.778116 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:48:39.792711 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:48:39.808953 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:48:39.810987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:48:39.821047 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:48:39.826417 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:48:39.830644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:48:39.846747 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:48:39.860340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:48:39.861435 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 12 20:48:39.861456 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 12 20:48:39.865569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:48:39.868351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:48:39.872109 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:48:39.905311 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec2ad6d4c289e85ac0433912c3545cd7 is 215.240ms for 966 entries. Nov 12 20:48:39.905311 systemd-journald[1531]: System Journal (/var/log/journal/ec2ad6d4c289e85ac0433912c3545cd7) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:48:40.141705 systemd-journald[1531]: Received client request to flush runtime journal. Nov 12 20:48:40.141821 kernel: loop0: detected capacity change from 0 to 142488 Nov 12 20:48:39.913314 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:48:39.916361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:48:39.919599 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:48:39.933384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:48:39.962218 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:48:40.007195 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:48:40.147824 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:48:40.167294 udevadm[1588]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 20:48:40.240620 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:48:40.244604 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:48:40.316007 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:48:40.340924 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Nov 12 20:48:40.350919 kernel: loop1: detected capacity change from 0 to 61336 Nov 12 20:48:40.360598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:48:40.416407 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Nov 12 20:48:40.416439 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Nov 12 20:48:40.433259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:48:40.440857 kernel: loop2: detected capacity change from 0 to 140768 Nov 12 20:48:40.602410 kernel: loop3: detected capacity change from 0 to 205544 Nov 12 20:48:40.786993 kernel: loop4: detected capacity change from 0 to 142488 Nov 12 20:48:40.959547 kernel: loop5: detected capacity change from 0 to 61336 Nov 12 20:48:41.095451 kernel: loop6: detected capacity change from 0 to 140768 Nov 12 20:48:41.172704 kernel: loop7: detected capacity change from 0 to 205544 Nov 12 20:48:41.212635 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 12 20:48:41.218056 (sd-merge)[1607]: Merged extensions into '/usr'. Nov 12 20:48:41.233058 systemd[1]: Reloading requested from client PID 1584 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:48:41.233082 systemd[1]: Reloading... Nov 12 20:48:41.551525 zram_generator::config[1632]: No configuration found. Nov 12 20:48:42.067262 ldconfig[1579]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:48:42.138906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:48:42.326261 systemd[1]: Reloading finished in 1092 ms. Nov 12 20:48:42.403047 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:48:42.408704 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:48:42.433684 systemd[1]: Starting ensure-sysext.service... Nov 12 20:48:42.438717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:48:42.452815 systemd[1]: Reloading requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:48:42.453176 systemd[1]: Reloading... Nov 12 20:48:42.480620 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:48:42.481151 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:48:42.482471 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:48:42.482964 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Nov 12 20:48:42.483051 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Nov 12 20:48:42.488413 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:48:42.488427 systemd-tmpfiles[1683]: Skipping /boot Nov 12 20:48:42.510763 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:48:42.510782 systemd-tmpfiles[1683]: Skipping /boot Nov 12 20:48:42.580508 zram_generator::config[1709]: No configuration found. 
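The loop devices and the (sd-merge) lines above are systemd-sysext activating the extension images staged earlier, including the /etc/extensions/kubernetes.raw symlink Ignition wrote during the files stage, and overlaying them onto /usr. Running systemd-sysext status shows the same state interactively; the sketch below merely lists what is staged and what release metadata a merged tree exposes (paths follow the sysext convention; which files exist depends on the images):

    from pathlib import Path

    # Extension images or symlinks staged for merging.
    ext_dir = Path("/etc/extensions")
    if ext_dir.is_dir():
        for p in sorted(ext_dir.iterdir()):
            print(p.name, "->", p.resolve())

    # After a merge, each extension publishes a release file under /usr.
    rel_dir = Path("/usr/lib/extension-release.d")
    if rel_dir.is_dir():
        for rel in sorted(rel_dir.glob("extension-release.*")):
            print(rel.name)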
Nov 12 20:48:42.900730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:48:43.005590 systemd[1]: Reloading finished in 551 ms. Nov 12 20:48:43.040054 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:48:43.050230 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:48:43.089531 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:48:43.105443 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:48:43.109743 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:48:43.136002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:48:43.157678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:48:43.165183 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:48:43.199994 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:43.200330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:48:43.235017 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:48:43.243601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:48:43.264862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:48:43.278761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:48:43.278993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:43.304050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:43.305816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:48:43.306082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:48:43.310560 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:48:43.316626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:43.341308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:48:43.341972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:48:43.351346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:43.352261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:48:43.376267 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:48:43.390572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
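The "Duplicate line for path" warnings systemd-tmpfiles logged during the reload above (provision.conf, systemd-flatcar.conf, systemd.conf) mean two tmpfiles.d fragments declare the same path; the first line wins and the later ones are ignored. A sketch that approximates the same check over the vendor directory (tmpfiles.d lines are whitespace-separated fields: type, path, mode, user, group, age, argument; fragments in /etc and /run are skipped here for brevity):

    from collections import defaultdict
    from pathlib import Path

    seen = defaultdict(list)
    for conf in sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")):
        for line in conf.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:
                seen[fields[1]].append(conf.name)

    for path, sources in sorted(seen.items()):
        if len(sources) > 1:
            print(path, "declared in:", ", ".join(sources))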
Nov 12 20:48:43.390922 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:48:43.401756 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:48:43.426565 systemd[1]: Finished ensure-sysext.service. Nov 12 20:48:43.508836 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:48:43.524886 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:48:43.533626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:48:43.567339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:48:43.579813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:48:43.585740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:48:43.593952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:48:43.613984 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:48:43.614201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:48:43.663669 systemd-udevd[1767]: Using default interface naming scheme 'v255'. Nov 12 20:48:43.668867 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:48:43.672073 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:48:43.693241 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:48:43.709843 augenrules[1797]: No rules Nov 12 20:48:43.710336 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:48:43.741283 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:48:43.744425 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:48:43.786041 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:48:43.834955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:48:43.849891 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:48:44.042826 systemd-resolved[1765]: Positive Trust Anchors: Nov 12 20:48:44.042847 systemd-resolved[1765]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:48:44.042896 systemd-resolved[1765]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:48:44.055855 systemd-resolved[1765]: Defaulting to hostname 'linux'. 
Nov 12 20:48:44.062497 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1823) Nov 12 20:48:44.062548 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:48:44.064954 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 20:48:44.065015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:48:44.068534 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1823) Nov 12 20:48:44.089069 (udev-worker)[1810]: Network interface NamePolicy= disabled on kernel command line. Nov 12 20:48:44.185065 systemd-networkd[1811]: lo: Link UP Nov 12 20:48:44.185515 systemd-networkd[1811]: lo: Gained carrier Nov 12 20:48:44.190195 systemd-networkd[1811]: Enumeration completed Nov 12 20:48:44.194056 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:48:44.195798 systemd-networkd[1811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:48:44.195803 systemd-networkd[1811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:48:44.199341 systemd[1]: Reached target network.target - Network. Nov 12 20:48:44.210401 systemd-networkd[1811]: eth0: Link UP Nov 12 20:48:44.210682 systemd-networkd[1811]: eth0: Gained carrier Nov 12 20:48:44.210710 systemd-networkd[1811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:48:44.213305 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:48:44.229209 systemd-networkd[1811]: eth0: DHCPv4 address 172.31.17.74/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 12 20:48:44.262046 systemd-networkd[1811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:48:44.331959 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1810) Nov 12 20:48:44.432586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 20:48:44.440231 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:48:44.440310 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Nov 12 20:48:44.449497 kernel: ACPI: button: Sleep Button [SLPF] Nov 12 20:48:44.506498 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Nov 12 20:48:44.535532 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Nov 12 20:48:44.585539 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:48:44.589745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:48:44.594105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 12 20:48:44.600757 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:48:44.604643 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:48:44.625816 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:48:44.676611 lvm[1922]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
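The lease networkd reported above, 172.31.17.74/20 with gateway 172.31.16.1, pins down the subnet exactly; the standard-library ipaddress module can unpack what that prefix implies:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.17.74/20")   # from the DHCPv4 log line
    net = iface.network
    print(net)                                          # 172.31.16.0/20
    print(net.broadcast_address)                        # 172.31.31.255
    print(net.num_addresses - 2)                        # 4094 usable hosts
    print(ipaddress.ip_address("172.31.16.1") in net)   # True: the gateway is on-link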
Nov 12 20:48:44.701325 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:48:44.770833 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:48:44.777087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:48:44.801994 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:48:44.837721 lvm[1928]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:48:44.885420 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:48:45.155385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:48:45.164829 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:48:45.168787 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:48:45.172958 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:48:45.176386 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:48:45.179678 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:48:45.183001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:48:45.187005 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:48:45.187063 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:48:45.188413 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:48:45.195541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:48:45.198886 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:48:45.228673 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:48:45.231274 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:48:45.238189 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:48:45.244039 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:48:45.245940 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:48:45.245979 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:48:45.255733 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:48:45.266834 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 12 20:48:45.272672 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:48:45.299973 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:48:45.303789 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:48:45.309134 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:48:45.312365 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:48:45.344255 systemd[1]: Started ntpd.service - Network Time Service. Nov 12 20:48:45.348675 jq[1938]: false Nov 12 20:48:45.359019 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
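docker.socket and sshd.socket above are socket units: systemd opens and holds the listening socket and starts the service only when traffic arrives, handing the socket over through the $LISTEN_FDS protocol (inherited descriptors start at fd 3). A minimal consumer-side sketch of that protocol; the fallback port is arbitrary and exists only so the sketch runs outside systemd:

    import os
    import socket

    SD_LISTEN_FDS_START = 3

    def activation_socket():
        # Adopt the socket systemd passed us, if any.
        if (os.environ.get("LISTEN_PID") == str(os.getpid())
                and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
            return socket.socket(fileno=SD_LISTEN_FDS_START)
        # Fallback for running outside systemd.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 8080))
        s.listen()
        return s

    srv = activation_socket()
    conn, addr = srv.accept()
    print("connection from", addr)
    conn.close()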
Nov 12 20:48:45.383762 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 12 20:48:45.393052 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:48:45.443466 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:48:45.481662 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:48:45.489435 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:48:45.492744 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:48:45.518275 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:48:45.554981 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:48:45.583435 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:48:45.584847 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:48:45.650300 jq[1951]: true Nov 12 20:48:45.715716 extend-filesystems[1939]: Found loop4 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found loop5 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found loop6 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found loop7 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found nvme0n1 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found nvme0n1p1 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found nvme0n1p2 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found nvme0n1p3 Nov 12 20:48:45.715716 extend-filesystems[1939]: Found usr Nov 12 20:48:45.715716 extend-filesystems[1939]: Found nvme0n1p4 Nov 12 20:48:45.752830 extend-filesystems[1939]: Found nvme0n1p6 Nov 12 20:48:45.752830 extend-filesystems[1939]: Found nvme0n1p7 Nov 12 20:48:45.752830 extend-filesystems[1939]: Found nvme0n1p9 Nov 12 20:48:45.752830 extend-filesystems[1939]: Checking size of /dev/nvme0n1p9 Nov 12 20:48:45.779209 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:48:45.779469 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:48:45.828937 ntpd[1941]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: ---------------------------------------------------- Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: corporation. 
Support and training for ntp-4 are Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: available at https://www.nwtime.org/support Nov 12 20:48:45.829886 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: ---------------------------------------------------- Nov 12 20:48:45.828975 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 12 20:48:45.828987 ntpd[1941]: ---------------------------------------------------- Nov 12 20:48:45.828996 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Nov 12 20:48:45.829007 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 12 20:48:45.829020 ntpd[1941]: corporation. Support and training for ntp-4 are Nov 12 20:48:45.829030 ntpd[1941]: available at https://www.nwtime.org/support Nov 12 20:48:45.829039 ntpd[1941]: ---------------------------------------------------- Nov 12 20:48:45.863690 jq[1959]: true Nov 12 20:48:45.905332 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: proto: precision = 0.095 usec (-23) Nov 12 20:48:45.905332 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: basedate set to 2024-10-31 Nov 12 20:48:45.905332 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: gps base set to 2024-11-03 (week 2339) Nov 12 20:48:45.905332 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Nov 12 20:48:45.905332 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 12 20:48:45.905332 ntpd[1941]: 12 Nov 20:48:45 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Nov 12 20:48:45.862162 ntpd[1941]: proto: precision = 0.095 usec (-23) Nov 12 20:48:45.866079 systemd-networkd[1811]: eth0: Gained IPv6LL Nov 12 20:48:45.862552 ntpd[1941]: basedate set to 2024-10-31 Nov 12 20:48:45.899134 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Nov 12 20:48:45.862569 ntpd[1941]: gps base set to 2024-11-03 (week 2339) Nov 12 20:48:45.893183 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.910 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.912 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.913 INFO Fetch successful Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.915 INFO Fetch successful Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.917 INFO Fetch successful Nov 12 20:48:45.922849 coreos-metadata[1936]: Nov 12 20:48:45.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 12 20:48:45.893251 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 12 20:48:45.926892 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:48:45.935890 coreos-metadata[1936]: Nov 12 20:48:45.925 INFO Fetch successful Nov 12 20:48:45.935890 coreos-metadata[1936]: Nov 12 20:48:45.925 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 12 20:48:45.893459 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Nov 12 20:48:45.918582 ntpd[1941]: Listen normally on 3 eth0 172.31.17.74:123 Nov 12 20:48:45.918712 ntpd[1941]: Listen normally on 4 lo [::1]:123 Nov 12 20:48:45.919000 ntpd[1941]: Listen normally on 5 eth0 [fe80::4ec:d8ff:fe86:6a9b%2]:123 Nov 12 20:48:45.919056 ntpd[1941]: Listening on routing socket on fd #22 for interface updates Nov 12 20:48:45.932589 dbus-daemon[1937]: [system] SELinux support is enabled Nov 12 20:48:45.937113 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 12 20:48:45.950228 update_engine[1950]: I20241112 20:48:45.946147 1950 main.cc:92] Flatcar Update Engine starting Nov 12 20:48:45.941191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:48:45.963971 coreos-metadata[1936]: Nov 12 20:48:45.940 INFO Fetch failed with 404: resource not found Nov 12 20:48:45.963971 coreos-metadata[1936]: Nov 12 20:48:45.940 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 12 20:48:45.963971 coreos-metadata[1936]: Nov 12 20:48:45.944 INFO Fetch successful Nov 12 20:48:45.963971 coreos-metadata[1936]: Nov 12 20:48:45.944 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 12 20:48:45.963971 coreos-metadata[1936]: Nov 12 20:48:45.954 INFO Fetch successful Nov 12 20:48:45.963971 coreos-metadata[1936]: Nov 12 20:48:45.954 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 12 20:48:45.937145 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 12 20:48:45.948792 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:48:45.940842 dbus-daemon[1937]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1811 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 12 20:48:45.952147 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:48:45.970102 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:48:45.981072 dbus-daemon[1937]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 12 20:48:45.981674 extend-filesystems[1939]: Resized partition /dev/nvme0n1p9 Nov 12 20:48:45.991712 coreos-metadata[1936]: Nov 12 20:48:45.974 INFO Fetch successful Nov 12 20:48:45.991712 coreos-metadata[1936]: Nov 12 20:48:45.974 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 12 20:48:45.991712 coreos-metadata[1936]: Nov 12 20:48:45.982 INFO Fetch successful Nov 12 20:48:45.991712 coreos-metadata[1936]: Nov 12 20:48:45.982 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 12 20:48:45.991712 coreos-metadata[1936]: Nov 12 20:48:45.988 INFO Fetch successful Nov 12 20:48:45.970194 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:48:45.992193 extend-filesystems[1990]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:48:45.973047 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:48:45.973077 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:48:46.002617 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 12 20:48:46.018536 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Nov 12 20:48:46.018631 tar[1953]: linux-amd64/helm Nov 12 20:48:46.005741 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:48:46.029003 update_engine[1950]: I20241112 20:48:46.012863 1950 update_check_scheduler.cc:74] Next update check in 2m46s Nov 12 20:48:46.006147 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:48:46.008126 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 12 20:48:46.011467 systemd[1]: Started update-engine.service - Update Engine. 
Nov 12 20:48:46.012384 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:48:46.022164 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 12 20:48:46.026803 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:48:46.151650 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Nov 12 20:48:46.216433 extend-filesystems[1990]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 12 20:48:46.216433 extend-filesystems[1990]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 20:48:46.216433 extend-filesystems[1990]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Nov 12 20:48:46.217846 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 12 20:48:46.250402 extend-filesystems[1939]: Resized filesystem in /dev/nvme0n1p9 Nov 12 20:48:46.231183 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:48:46.231599 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:48:46.236735 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:48:46.289266 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:48:46.324555 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1810) Nov 12 20:48:46.368215 bash[2032]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:48:46.364604 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:48:46.378732 systemd[1]: Starting sshkeys.service... Nov 12 20:48:46.382360 systemd-logind[1946]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:48:46.382385 systemd-logind[1946]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 12 20:48:46.382408 systemd-logind[1946]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:48:46.386312 systemd-logind[1946]: New seat seat0. Nov 12 20:48:46.393171 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:48:46.420748 amazon-ssm-agent[1995]: Initializing new seelog logger Nov 12 20:48:46.428574 amazon-ssm-agent[1995]: New Seelog Logger Creation Complete Nov 12 20:48:46.428735 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 20:48:46.428735 amazon-ssm-agent[1995]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 20:48:46.429388 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 processing appconfig overrides Nov 12 20:48:46.436102 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 20:48:46.436102 amazon-ssm-agent[1995]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 20:48:46.436102 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 processing appconfig overrides Nov 12 20:48:46.436698 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 20:48:46.436698 amazon-ssm-agent[1995]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 12 20:48:46.454498 amazon-ssm-agent[1995]: 2024-11-12 20:48:46 INFO Proxy environment variables: Nov 12 20:48:46.454498 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 processing appconfig overrides Nov 12 20:48:46.463666 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 12 20:48:46.476389 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 12 20:48:46.479385 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 20:48:46.479385 amazon-ssm-agent[1995]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 20:48:46.479385 amazon-ssm-agent[1995]: 2024/11/12 20:48:46 processing appconfig overrides Nov 12 20:48:46.565871 amazon-ssm-agent[1995]: 2024-11-12 20:48:46 INFO http_proxy: Nov 12 20:48:46.703459 amazon-ssm-agent[1995]: 2024-11-12 20:48:46 INFO no_proxy: Nov 12 20:48:46.729141 coreos-metadata[2067]: Nov 12 20:48:46.727 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 12 20:48:46.730141 coreos-metadata[2067]: Nov 12 20:48:46.730 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 12 20:48:46.730889 coreos-metadata[2067]: Nov 12 20:48:46.730 INFO Fetch successful Nov 12 20:48:46.730976 coreos-metadata[2067]: Nov 12 20:48:46.730 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 12 20:48:46.733649 coreos-metadata[2067]: Nov 12 20:48:46.731 INFO Fetch successful Nov 12 20:48:46.760709 unknown[2067]: wrote ssh authorized keys file for user: core Nov 12 20:48:46.805881 amazon-ssm-agent[1995]: 2024-11-12 20:48:46 INFO https_proxy: Nov 12 20:48:46.866110 update-ssh-keys[2108]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:48:46.862640 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 20:48:46.870341 systemd[1]: Finished sshkeys.service. Nov 12 20:48:46.942720 amazon-ssm-agent[1995]: 2024-11-12 20:48:46 INFO Checking if agent identity type OnPrem can be assumed Nov 12 20:48:47.038518 amazon-ssm-agent[1995]: 2024-11-12 20:48:46 INFO Checking if agent identity type EC2 can be assumed Nov 12 20:48:47.151023 dbus-daemon[1937]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 12 20:48:47.151220 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 12 20:48:47.177533 dbus-daemon[1937]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1992 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 12 20:48:47.187960 systemd[1]: Starting polkit.service - Authorization Manager... 
Nov 12 20:48:47.207559 locksmithd[1996]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:48:47.243028 polkitd[2146]: Started polkitd version 121 Nov 12 20:48:47.265303 polkitd[2146]: Loading rules from directory /etc/polkit-1/rules.d Nov 12 20:48:47.267559 polkitd[2146]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 12 20:48:47.272280 polkitd[2146]: Finished loading, compiling and executing 2 rules Nov 12 20:48:47.277018 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO Agent will take identity from EC2 Nov 12 20:48:47.277580 dbus-daemon[1937]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 12 20:48:47.277885 systemd[1]: Started polkit.service - Authorization Manager. Nov 12 20:48:47.283996 polkitd[2146]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 12 20:48:47.336144 systemd-hostnamed[1992]: Hostname set to <ip-172-31-17-74> (transient) Nov 12 20:48:47.336640 systemd-resolved[1765]: System hostname changed to 'ip-172-31-17-74'. Nov 12 20:48:47.377677 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 20:48:47.477634 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 20:48:47.528122 containerd[1977]: time="2024-11-12T20:48:47.528011138Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:48:47.581000 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 20:48:47.681245 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 12 20:48:47.783833 containerd[1977]: time="2024-11-12T20:48:47.781182749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:48:47.791309 containerd[1977]: time="2024-11-12T20:48:47.791246908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:48:47.792138 containerd[1977]: time="2024-11-12T20:48:47.792106003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:48:47.792742 containerd[1977]: time="2024-11-12T20:48:47.792233780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:48:47.792742 containerd[1977]: time="2024-11-12T20:48:47.792428820Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:48:47.792742 containerd[1977]: time="2024-11-12T20:48:47.792452501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:48:47.792742 containerd[1977]: time="2024-11-12T20:48:47.792561376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:48:47.792742 containerd[1977]: time="2024-11-12T20:48:47.792580462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:48:47.792957 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 12 20:48:47.794593 containerd[1977]: time="2024-11-12T20:48:47.794058958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:48:47.794593 containerd[1977]: time="2024-11-12T20:48:47.794097615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:48:47.794593 containerd[1977]: time="2024-11-12T20:48:47.794125469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:48:47.794593 containerd[1977]: time="2024-11-12T20:48:47.794140690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:48:47.794593 containerd[1977]: time="2024-11-12T20:48:47.794267718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:48:47.794593 containerd[1977]: time="2024-11-12T20:48:47.794552689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:48:47.807060 containerd[1977]: time="2024-11-12T20:48:47.801154772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:48:47.807060 containerd[1977]: time="2024-11-12T20:48:47.801203446Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:48:47.807060 containerd[1977]: time="2024-11-12T20:48:47.801380941Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:48:47.807060 containerd[1977]: time="2024-11-12T20:48:47.801440129Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:48:47.834131 containerd[1977]: time="2024-11-12T20:48:47.833027477Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:48:47.834131 containerd[1977]: time="2024-11-12T20:48:47.833182065Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:48:47.834131 containerd[1977]: time="2024-11-12T20:48:47.833213651Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:48:47.834131 containerd[1977]: time="2024-11-12T20:48:47.833266864Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:48:47.834131 containerd[1977]: time="2024-11-12T20:48:47.833289787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:48:47.834131 containerd[1977]: time="2024-11-12T20:48:47.833537296Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:48:47.834131 containerd[1977]: time="2024-11-12T20:48:47.834047372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 12 20:48:47.834657 containerd[1977]: time="2024-11-12T20:48:47.834627045Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.834877580Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.834906762Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.834946408Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.834971710Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.834990140Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.835024797Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.835049425Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.835087 containerd[1977]: time="2024-11-12T20:48:47.835068694Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837511346Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837544378Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837593243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837616098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837649313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837669961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837688248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837707443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837742561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837762270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837782421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837819366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837839835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.838653 containerd[1977]: time="2024-11-12T20:48:47.837858935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.837892032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.837917088Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.837965296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.837983185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.837999232Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838089577Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838189473Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838206676Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838224396Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838252322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838271273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838291546Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:48:47.843388 containerd[1977]: time="2024-11-12T20:48:47.838305982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:48:47.861869 containerd[1977]: time="2024-11-12T20:48:47.844199185Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:48:47.861869 containerd[1977]: time="2024-11-12T20:48:47.846313068Z" level=info msg="Connect containerd service" Nov 12 20:48:47.861869 containerd[1977]: time="2024-11-12T20:48:47.846432174Z" level=info msg="using legacy CRI server" Nov 12 20:48:47.861869 containerd[1977]: time="2024-11-12T20:48:47.846445054Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:48:47.861869 containerd[1977]: time="2024-11-12T20:48:47.846672153Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:48:47.861869 containerd[1977]: time="2024-11-12T20:48:47.856441859Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:48:47.863514 
containerd[1977]: time="2024-11-12T20:48:47.862985432Z" level=info msg="Start subscribing containerd event" Nov 12 20:48:47.863514 containerd[1977]: time="2024-11-12T20:48:47.863080547Z" level=info msg="Start recovering state" Nov 12 20:48:47.863514 containerd[1977]: time="2024-11-12T20:48:47.863188270Z" level=info msg="Start event monitor" Nov 12 20:48:47.863514 containerd[1977]: time="2024-11-12T20:48:47.863203479Z" level=info msg="Start snapshots syncer" Nov 12 20:48:47.863514 containerd[1977]: time="2024-11-12T20:48:47.863217350Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:48:47.863514 containerd[1977]: time="2024-11-12T20:48:47.863227695Z" level=info msg="Start streaming server" Nov 12 20:48:47.864590 containerd[1977]: time="2024-11-12T20:48:47.864559413Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:48:47.872100 containerd[1977]: time="2024-11-12T20:48:47.869298742Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:48:47.869535 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:48:47.880652 containerd[1977]: time="2024-11-12T20:48:47.872895079Z" level=info msg="containerd successfully booted in 0.346760s" Nov 12 20:48:47.904108 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [amazon-ssm-agent] Starting Core Agent Nov 12 20:48:48.003716 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 12 20:48:48.104891 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [Registrar] Starting registrar module Nov 12 20:48:48.207066 amazon-ssm-agent[1995]: 2024-11-12 20:48:47 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 12 20:48:48.414210 amazon-ssm-agent[1995]: 2024-11-12 20:48:48 INFO [EC2Identity] EC2 registration was successful. Nov 12 20:48:48.451433 amazon-ssm-agent[1995]: 2024-11-12 20:48:48 INFO [CredentialRefresher] credentialRefresher has started Nov 12 20:48:48.451433 amazon-ssm-agent[1995]: 2024-11-12 20:48:48 INFO [CredentialRefresher] Starting credentials refresher loop Nov 12 20:48:48.451433 amazon-ssm-agent[1995]: 2024-11-12 20:48:48 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 12 20:48:48.524022 amazon-ssm-agent[1995]: 2024-11-12 20:48:48 INFO [CredentialRefresher] Next credential rotation will be in 30.40832829635 minutes Nov 12 20:48:48.636597 tar[1953]: linux-amd64/LICENSE Nov 12 20:48:48.644419 tar[1953]: linux-amd64/README.md Nov 12 20:48:48.693258 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:48:49.099720 sshd_keygen[1966]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:48:49.222640 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:48:49.249978 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:48:49.307647 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:48:49.308084 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:48:49.337958 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:48:49.422390 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:48:49.472355 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:48:49.509012 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:48:49.513568 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 12 20:48:49.519828 amazon-ssm-agent[1995]: 2024-11-12 20:48:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 12 20:48:49.622343 amazon-ssm-agent[1995]: 2024-11-12 20:48:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2181) started Nov 12 20:48:49.725406 amazon-ssm-agent[1995]: 2024-11-12 20:48:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 12 20:48:49.738044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:49.741749 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:48:49.749663 systemd[1]: Startup finished in 1.066s (kernel) + 11.443s (initrd) + 12.557s (userspace) = 25.067s. Nov 12 20:48:49.770352 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:48:51.530505 kubelet[2190]: E1112 20:48:51.530345 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:48:51.539729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:48:51.540338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:48:51.541553 systemd[1]: kubelet.service: Consumed 1.055s CPU time. Nov 12 20:48:53.368045 systemd-resolved[1765]: Clock change detected. Flushing caches. Nov 12 20:48:55.325547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:48:55.335398 systemd[1]: Started sshd@0-172.31.17.74:22-139.178.89.65:38336.service - OpenSSH per-connection server daemon (139.178.89.65:38336). Nov 12 20:48:55.546322 sshd[2209]: Accepted publickey for core from 139.178.89.65 port 38336 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:48:55.550357 sshd[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:55.594432 systemd-logind[1946]: New session 1 of user core. Nov 12 20:48:55.600064 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:48:55.609309 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:48:55.687830 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:48:55.710083 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:48:55.732285 (systemd)[2213]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:48:56.002032 systemd[2213]: Queued start job for default target default.target. Nov 12 20:48:56.011122 systemd[2213]: Created slice app.slice - User Application Slice. Nov 12 20:48:56.011172 systemd[2213]: Reached target paths.target - Paths. Nov 12 20:48:56.011193 systemd[2213]: Reached target timers.target - Timers. Nov 12 20:48:56.015193 systemd[2213]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:48:56.073517 systemd[2213]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:48:56.073835 systemd[2213]: Reached target sockets.target - Sockets. 
Nov 12 20:48:56.074011 systemd[2213]: Reached target basic.target - Basic System. Nov 12 20:48:56.074076 systemd[2213]: Reached target default.target - Main User Target. Nov 12 20:48:56.074111 systemd[2213]: Startup finished in 324ms. Nov 12 20:48:56.074285 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:48:56.094164 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:48:56.280324 systemd[1]: Started sshd@1-172.31.17.74:22-139.178.89.65:38340.service - OpenSSH per-connection server daemon (139.178.89.65:38340). Nov 12 20:48:56.482843 sshd[2224]: Accepted publickey for core from 139.178.89.65 port 38340 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:48:56.485410 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:56.499904 systemd-logind[1946]: New session 2 of user core. Nov 12 20:48:56.511776 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:48:56.660704 sshd[2224]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:56.671905 systemd[1]: sshd@1-172.31.17.74:22-139.178.89.65:38340.service: Deactivated successfully. Nov 12 20:48:56.678428 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:48:56.683553 systemd-logind[1946]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:48:56.728622 systemd[1]: Started sshd@2-172.31.17.74:22-139.178.89.65:38350.service - OpenSSH per-connection server daemon (139.178.89.65:38350). Nov 12 20:48:56.732119 systemd-logind[1946]: Removed session 2. Nov 12 20:48:56.925096 sshd[2231]: Accepted publickey for core from 139.178.89.65 port 38350 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:48:56.927077 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:56.944293 systemd-logind[1946]: New session 3 of user core. Nov 12 20:48:56.951147 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:48:57.076156 sshd[2231]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:57.080917 systemd[1]: sshd@2-172.31.17.74:22-139.178.89.65:38350.service: Deactivated successfully. Nov 12 20:48:57.087850 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:48:57.090671 systemd-logind[1946]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:48:57.097205 systemd-logind[1946]: Removed session 3. Nov 12 20:48:57.124421 systemd[1]: Started sshd@3-172.31.17.74:22-139.178.89.65:52222.service - OpenSSH per-connection server daemon (139.178.89.65:52222). Nov 12 20:48:57.329778 sshd[2238]: Accepted publickey for core from 139.178.89.65 port 52222 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:48:57.333475 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:57.349099 systemd-logind[1946]: New session 4 of user core. Nov 12 20:48:57.361167 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:48:57.523855 sshd[2238]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:57.537123 systemd[1]: sshd@3-172.31.17.74:22-139.178.89.65:52222.service: Deactivated successfully. Nov 12 20:48:57.543673 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:48:57.547036 systemd-logind[1946]: Session 4 logged out. Waiting for processes to exit. 
Nov 12 20:48:57.588373 systemd[1]: Started sshd@4-172.31.17.74:22-139.178.89.65:52228.service - OpenSSH per-connection server daemon (139.178.89.65:52228). Nov 12 20:48:57.592877 systemd-logind[1946]: Removed session 4. Nov 12 20:48:57.804639 sshd[2245]: Accepted publickey for core from 139.178.89.65 port 52228 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:48:57.806819 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:57.818961 systemd-logind[1946]: New session 5 of user core. Nov 12 20:48:57.828153 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:48:57.961811 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:48:57.962379 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:48:57.992217 sudo[2248]: pam_unix(sudo:session): session closed for user root Nov 12 20:48:58.016830 sshd[2245]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:58.050819 systemd[1]: sshd@4-172.31.17.74:22-139.178.89.65:52228.service: Deactivated successfully. Nov 12 20:48:58.069994 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:48:58.075422 systemd-logind[1946]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:48:58.099399 systemd[1]: Started sshd@5-172.31.17.74:22-139.178.89.65:52234.service - OpenSSH per-connection server daemon (139.178.89.65:52234). Nov 12 20:48:58.110972 systemd-logind[1946]: Removed session 5. Nov 12 20:48:58.329554 sshd[2253]: Accepted publickey for core from 139.178.89.65 port 52234 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:48:58.331359 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:58.349898 systemd-logind[1946]: New session 6 of user core. Nov 12 20:48:58.361388 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:48:58.494902 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:48:58.495432 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:48:58.515874 sudo[2257]: pam_unix(sudo:session): session closed for user root Nov 12 20:48:58.551800 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:48:58.552411 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:48:58.602346 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:48:58.637335 auditctl[2260]: No rules Nov 12 20:48:58.638096 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:48:58.638334 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:48:58.651070 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:48:58.805143 augenrules[2278]: No rules Nov 12 20:48:58.810514 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:48:58.815631 sudo[2256]: pam_unix(sudo:session): session closed for user root Nov 12 20:48:58.844147 sshd[2253]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:58.854950 systemd[1]: sshd@5-172.31.17.74:22-139.178.89.65:52234.service: Deactivated successfully. Nov 12 20:48:58.861416 systemd[1]: session-6.scope: Deactivated successfully. 
Nov 12 20:48:58.866941 systemd-logind[1946]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:48:58.889690 systemd[1]: Started sshd@6-172.31.17.74:22-139.178.89.65:52236.service - OpenSSH per-connection server daemon (139.178.89.65:52236). Nov 12 20:48:58.900103 systemd-logind[1946]: Removed session 6. Nov 12 20:48:59.082975 sshd[2286]: Accepted publickey for core from 139.178.89.65 port 52236 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:48:59.085988 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:59.095628 systemd-logind[1946]: New session 7 of user core. Nov 12 20:48:59.107163 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:48:59.218669 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:48:59.219518 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:49:00.477339 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:49:00.499442 (dockerd)[2305]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:49:01.499113 dockerd[2305]: time="2024-11-12T20:49:01.499044124Z" level=info msg="Starting up" Nov 12 20:49:01.747334 dockerd[2305]: time="2024-11-12T20:49:01.747285499Z" level=info msg="Loading containers: start." Nov 12 20:49:02.344467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:49:02.394334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:02.623669 kernel: Initializing XFRM netlink socket Nov 12 20:49:02.845926 (udev-worker)[2325]: Network interface NamePolicy= disabled on kernel command line. Nov 12 20:49:03.042193 systemd-networkd[1811]: docker0: Link UP Nov 12 20:49:03.089184 dockerd[2305]: time="2024-11-12T20:49:03.089135077Z" level=info msg="Loading containers: done." Nov 12 20:49:03.187785 dockerd[2305]: time="2024-11-12T20:49:03.187659256Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:49:03.188008 dockerd[2305]: time="2024-11-12T20:49:03.187800185Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:49:03.188008 dockerd[2305]: time="2024-11-12T20:49:03.187980589Z" level=info msg="Daemon has completed initialization" Nov 12 20:49:03.270292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:03.284470 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:49:03.308913 dockerd[2305]: time="2024-11-12T20:49:03.308629899Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:49:03.309230 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 12 20:49:03.418427 kubelet[2426]: E1112 20:49:03.413950 2426 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:49:03.428598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:49:03.428781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:49:04.714276 containerd[1977]: time="2024-11-12T20:49:04.713853861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\"" Nov 12 20:49:05.488509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359471300.mount: Deactivated successfully. Nov 12 20:49:08.468356 containerd[1977]: time="2024-11-12T20:49:08.468306060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:08.476448 containerd[1977]: time="2024-11-12T20:49:08.476377987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27975588" Nov 12 20:49:08.485716 containerd[1977]: time="2024-11-12T20:49:08.485660942Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:08.499879 containerd[1977]: time="2024-11-12T20:49:08.499827098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:08.504708 containerd[1977]: time="2024-11-12T20:49:08.503506180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 3.789220593s" Nov 12 20:49:08.504708 containerd[1977]: time="2024-11-12T20:49:08.503563691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\"" Nov 12 20:49:08.508116 containerd[1977]: time="2024-11-12T20:49:08.508074693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\"" Nov 12 20:49:12.492581 containerd[1977]: time="2024-11-12T20:49:12.492525516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:12.495834 containerd[1977]: time="2024-11-12T20:49:12.495666559Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24701922" Nov 12 20:49:12.508942 containerd[1977]: time="2024-11-12T20:49:12.505461246Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:12.521302 containerd[1977]: time="2024-11-12T20:49:12.521246243Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:12.524084 containerd[1977]: time="2024-11-12T20:49:12.524037772Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 4.015910488s" Nov 12 20:49:12.524332 containerd[1977]: time="2024-11-12T20:49:12.524306032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\"" Nov 12 20:49:12.525434 containerd[1977]: time="2024-11-12T20:49:12.525395132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\"" Nov 12 20:49:13.679338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:49:13.705816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:14.283157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:14.323643 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:49:14.526363 kubelet[2528]: E1112 20:49:14.526303 2528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:49:14.532167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:49:14.532371 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 20:49:15.406891 containerd[1977]: time="2024-11-12T20:49:15.406827509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:15.408633 containerd[1977]: time="2024-11-12T20:49:15.408576071Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18657606" Nov 12 20:49:15.410647 containerd[1977]: time="2024-11-12T20:49:15.410216306Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:15.414822 containerd[1977]: time="2024-11-12T20:49:15.414768987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:15.416179 containerd[1977]: time="2024-11-12T20:49:15.416132189Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 2.890699746s" Nov 12 20:49:15.416302 containerd[1977]: time="2024-11-12T20:49:15.416183949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\"" Nov 12 20:49:15.417470 containerd[1977]: time="2024-11-12T20:49:15.417435742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\"" Nov 12 20:49:17.034758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount459599530.mount: Deactivated successfully. Nov 12 20:49:17.909519 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 12 20:49:17.941218 containerd[1977]: time="2024-11-12T20:49:17.941159593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:17.946502 containerd[1977]: time="2024-11-12T20:49:17.946253797Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30226814" Nov 12 20:49:17.951033 containerd[1977]: time="2024-11-12T20:49:17.950979938Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:17.956550 containerd[1977]: time="2024-11-12T20:49:17.956496270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:17.958224 containerd[1977]: time="2024-11-12T20:49:17.957530352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 2.539481214s" Nov 12 20:49:17.958224 containerd[1977]: time="2024-11-12T20:49:17.957582494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\"" Nov 12 20:49:17.958681 containerd[1977]: time="2024-11-12T20:49:17.958607462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:49:18.545749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321577087.mount: Deactivated successfully. 
Nov 12 20:49:20.091065 containerd[1977]: time="2024-11-12T20:49:20.091002975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:20.092570 containerd[1977]: time="2024-11-12T20:49:20.092356558Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:49:20.094704 containerd[1977]: time="2024-11-12T20:49:20.094202269Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:20.100178 containerd[1977]: time="2024-11-12T20:49:20.100123463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:20.101623 containerd[1977]: time="2024-11-12T20:49:20.101572003Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.142926391s" Nov 12 20:49:20.102065 containerd[1977]: time="2024-11-12T20:49:20.101628306Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:49:20.102374 containerd[1977]: time="2024-11-12T20:49:20.102319593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 12 20:49:20.632246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419839453.mount: Deactivated successfully. 
Nov 12 20:49:20.645761 containerd[1977]: time="2024-11-12T20:49:20.645703904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:20.647224 containerd[1977]: time="2024-11-12T20:49:20.647042491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 12 20:49:20.649344 containerd[1977]: time="2024-11-12T20:49:20.648581557Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:20.652637 containerd[1977]: time="2024-11-12T20:49:20.652569923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:20.654320 containerd[1977]: time="2024-11-12T20:49:20.653653297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 551.279236ms" Nov 12 20:49:20.654320 containerd[1977]: time="2024-11-12T20:49:20.653697983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 12 20:49:20.654695 containerd[1977]: time="2024-11-12T20:49:20.654667746Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Nov 12 20:49:21.339689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133751249.mount: Deactivated successfully. Nov 12 20:49:24.771873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:49:24.788224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:25.596174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:25.613652 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:49:25.740370 kubelet[2655]: E1112 20:49:25.740333 2655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:49:25.744330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:49:25.744526 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
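
The kubelet[2655] attempt above dies immediately because kubeadm has not yet written /var/lib/kubelet/config.yaml, and systemd keeps rescheduling the unit (restart counter at 3) until the file appears. That file is also where the deprecation warnings from the next attempt want the command-line flags moved: --container-runtime-endpoint and --volume-plugin-dir belong in the KubeletConfiguration rather than on the unit's command line. A minimal sketch of such a file, emitted as JSON (a subset of YAML, which the kubelet's config loader accepts); the endpoint value is an assumption of the sketch, while the plugin directory matches the flexvolume path logged further down:

```go
// Sketch: emit a minimal KubeletConfiguration covering the deprecated
// flags warned about below. Field values are illustrative assumptions;
// JSON is valid YAML, so the kubelet accepts this form at
// /var/lib/kubelet/config.yaml.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	cfg := map[string]any{
		"apiVersion":               "kubelet.config.k8s.io/v1beta1",
		"kind":                     "KubeletConfiguration",
		"containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
		"volumePluginDir":          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(cfg); err != nil {
		log.Fatal(err)
	}
}
```

Note that --pod-infra-container-image has no config-file counterpart; as the warning below says, the image garbage collector now learns the sandbox image from the CRI instead.
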
Nov 12 20:49:26.077999 containerd[1977]: time="2024-11-12T20:49:26.077927516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:26.079691 containerd[1977]: time="2024-11-12T20:49:26.079636316Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779650" Nov 12 20:49:26.081572 containerd[1977]: time="2024-11-12T20:49:26.080959994Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:26.088417 containerd[1977]: time="2024-11-12T20:49:26.088362569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:26.089804 containerd[1977]: time="2024-11-12T20:49:26.089756045Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.435053144s" Nov 12 20:49:26.089945 containerd[1977]: time="2024-11-12T20:49:26.089810448Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Nov 12 20:49:30.157662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:30.172011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:30.212623 systemd[1]: Reloading requested from client PID 2690 ('systemctl') (unit session-7.scope)... Nov 12 20:49:30.212641 systemd[1]: Reloading... Nov 12 20:49:30.324916 zram_generator::config[2727]: No configuration found. Nov 12 20:49:30.610207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:49:30.909368 systemd[1]: Reloading finished in 696 ms. Nov 12 20:49:31.051214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:31.051831 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:49:31.060387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:31.062165 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:49:31.062514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:31.071339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:31.494128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:31.507511 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:49:31.657371 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:49:31.657371 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:49:31.657371 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:49:31.659730 kubelet[2798]: I1112 20:49:31.659662 2798 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:49:32.134928 kubelet[2798]: I1112 20:49:32.134862 2798 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:49:32.134928 kubelet[2798]: I1112 20:49:32.134928 2798 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:49:32.135412 kubelet[2798]: I1112 20:49:32.135379 2798 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:49:32.204570 kubelet[2798]: I1112 20:49:32.203938 2798 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:49:32.204909 kubelet[2798]: E1112 20:49:32.204866 2798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:32.230719 kubelet[2798]: E1112 20:49:32.230554 2798 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:49:32.231934 kubelet[2798]: I1112 20:49:32.231833 2798 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:49:32.235812 update_engine[1950]: I20241112 20:49:32.234990 1950 update_attempter.cc:509] Updating boot flags... Nov 12 20:49:32.239572 kubelet[2798]: I1112 20:49:32.239448 2798 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:49:32.242586 kubelet[2798]: I1112 20:49:32.241814 2798 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:49:32.242586 kubelet[2798]: I1112 20:49:32.242041 2798 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:49:32.242586 kubelet[2798]: I1112 20:49:32.242081 2798 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:49:32.242586 kubelet[2798]: I1112 20:49:32.242330 2798 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:49:32.242821 kubelet[2798]: I1112 20:49:32.242344 2798 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:49:32.242821 kubelet[2798]: I1112 20:49:32.242464 2798 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:49:32.245907 kubelet[2798]: I1112 20:49:32.245869 2798 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:49:32.246041 kubelet[2798]: I1112 20:49:32.246032 2798 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:49:32.246141 kubelet[2798]: I1112 20:49:32.246132 2798 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:49:32.247041 kubelet[2798]: I1112 20:49:32.247022 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:49:32.276364 kubelet[2798]: W1112 20:49:32.275778 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-74&limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:32.276364 kubelet[2798]: E1112 20:49:32.275865 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.17.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-74&limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:32.276364 kubelet[2798]: W1112 20:49:32.275991 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:32.276364 kubelet[2798]: E1112 20:49:32.276036 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:32.283384 kubelet[2798]: I1112 20:49:32.283218 2798 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:49:32.285702 kubelet[2798]: I1112 20:49:32.285678 2798 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:49:32.285896 kubelet[2798]: W1112 20:49:32.285869 2798 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:49:32.294084 kubelet[2798]: I1112 20:49:32.286441 2798 server.go:1269] "Started kubelet" Nov 12 20:49:32.295612 kubelet[2798]: I1112 20:49:32.295564 2798 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:49:32.299909 kubelet[2798]: I1112 20:49:32.299201 2798 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:49:32.304215 kubelet[2798]: I1112 20:49:32.304150 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:49:32.304588 kubelet[2798]: I1112 20:49:32.304561 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:49:32.305147 kubelet[2798]: I1112 20:49:32.305117 2798 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:49:32.325007 kubelet[2798]: E1112 20:49:32.307691 2798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.74:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-74.180753aa376903c6 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-74,UID:ip-172-31-17-74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-74,},FirstTimestamp:2024-11-12 20:49:32.28641991 +0000 UTC m=+0.771403901,LastTimestamp:2024-11-12 20:49:32.28641991 +0000 UTC m=+0.771403901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-74,}" Nov 12 20:49:32.325007 kubelet[2798]: I1112 20:49:32.324708 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:49:32.333055 kubelet[2798]: I1112 20:49:32.332557 2798 volume_manager.go:289] "Starting Kubelet Volume 
Manager" Nov 12 20:49:32.336042 kubelet[2798]: E1112 20:49:32.333295 2798 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-74\" not found" Nov 12 20:49:32.347026 kubelet[2798]: I1112 20:49:32.346990 2798 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:49:32.347815 kubelet[2798]: I1112 20:49:32.347271 2798 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:49:32.355504 kubelet[2798]: E1112 20:49:32.352542 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-74?timeout=10s\": dial tcp 172.31.17.74:6443: connect: connection refused" interval="200ms" Nov 12 20:49:32.358541 kubelet[2798]: W1112 20:49:32.355987 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:32.358651 kubelet[2798]: E1112 20:49:32.358565 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:32.358651 kubelet[2798]: I1112 20:49:32.357777 2798 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:49:32.358748 kubelet[2798]: I1112 20:49:32.358674 2798 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:49:32.358825 kubelet[2798]: E1112 20:49:32.358262 2798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.74:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-74.180753aa376903c6 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-74,UID:ip-172-31-17-74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-74,},FirstTimestamp:2024-11-12 20:49:32.28641991 +0000 UTC m=+0.771403901,LastTimestamp:2024-11-12 20:49:32.28641991 +0000 UTC m=+0.771403901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-74,}" Nov 12 20:49:32.361210 kubelet[2798]: E1112 20:49:32.361185 2798 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:49:32.364173 kubelet[2798]: I1112 20:49:32.364143 2798 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:49:32.379044 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2823) Nov 12 20:49:32.419999 kubelet[2798]: I1112 20:49:32.418056 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:49:32.434963 kubelet[2798]: I1112 20:49:32.434904 2798 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:49:32.434963 kubelet[2798]: I1112 20:49:32.434947 2798 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:49:32.434963 kubelet[2798]: I1112 20:49:32.434969 2798 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:49:32.435155 kubelet[2798]: E1112 20:49:32.435027 2798 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:49:32.441790 kubelet[2798]: E1112 20:49:32.440843 2798 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-74\" not found" Nov 12 20:49:32.442408 kubelet[2798]: I1112 20:49:32.442377 2798 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:49:32.442408 kubelet[2798]: I1112 20:49:32.442409 2798 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:49:32.442525 kubelet[2798]: I1112 20:49:32.442434 2798 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:49:32.449865 kubelet[2798]: I1112 20:49:32.449832 2798 policy_none.go:49] "None policy: Start" Nov 12 20:49:32.451176 kubelet[2798]: W1112 20:49:32.451106 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:32.451341 kubelet[2798]: E1112 20:49:32.451176 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:32.453362 kubelet[2798]: I1112 20:49:32.453024 2798 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:49:32.453362 kubelet[2798]: I1112 20:49:32.453053 2798 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:49:32.471325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:49:32.488197 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:49:32.496212 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 20:49:32.503317 kubelet[2798]: I1112 20:49:32.503282 2798 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:49:32.503538 kubelet[2798]: I1112 20:49:32.503519 2798 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:49:32.503648 kubelet[2798]: I1112 20:49:32.503548 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:49:32.523757 kubelet[2798]: I1112 20:49:32.523732 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:49:32.531296 kubelet[2798]: E1112 20:49:32.531089 2798 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-74\" not found" Nov 12 20:49:32.557999 kubelet[2798]: I1112 20:49:32.557873 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77f46a4f56ad8ecad6a9658cd93ab4c1-ca-certs\") pod \"kube-apiserver-ip-172-31-17-74\" (UID: \"77f46a4f56ad8ecad6a9658cd93ab4c1\") " pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:32.557999 kubelet[2798]: I1112 20:49:32.557937 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77f46a4f56ad8ecad6a9658cd93ab4c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-74\" (UID: \"77f46a4f56ad8ecad6a9658cd93ab4c1\") " pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:32.558674 kubelet[2798]: I1112 20:49:32.558200 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77f46a4f56ad8ecad6a9658cd93ab4c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-74\" (UID: \"77f46a4f56ad8ecad6a9658cd93ab4c1\") " pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:32.567086 kubelet[2798]: E1112 20:49:32.566713 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-74?timeout=10s\": dial tcp 172.31.17.74:6443: connect: connection refused" interval="400ms" Nov 12 20:49:32.608164 kubelet[2798]: I1112 20:49:32.608136 2798 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-74" Nov 12 20:49:32.611654 kubelet[2798]: E1112 20:49:32.611454 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.74:6443/api/v1/nodes\": dial tcp 172.31.17.74:6443: connect: connection refused" node="ip-172-31-17-74" Nov 12 20:49:32.642997 systemd[1]: Created slice kubepods-burstable-pod77f46a4f56ad8ecad6a9658cd93ab4c1.slice - libcontainer container kubepods-burstable-pod77f46a4f56ad8ecad6a9658cd93ab4c1.slice. 
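
The slice names created above encode pod QoS: with the systemd cgroup driver ("CgroupDriver":"systemd" in the dump), the kubelet parks every pod under kubepods.slice, inside a besteffort or burstable child slice unless the pod is Guaranteed. The static control-plane pods land in kubepods-burstable-pod<uid>.slice because they set requests without matching limits. A simplified sketch of the classification rule, with illustrative resource values:

```go
// Sketch of the QoS classification that decides which kubepods slice a
// pod lands in. Simplified: real kubelet code compares ResourceList
// quantities, not strings, and requires CPU and memory limits for
// Guaranteed; this mirrors only the broad rules.
package main

import "fmt"

type resources struct{ requests, limits map[string]string }

func qosClass(containers []resources) string {
	anySet, guaranteed := false, true
	for _, c := range containers {
		if len(c.requests)+len(c.limits) > 0 {
			anySet = true
		}
		if len(c.limits) == 0 || len(c.limits) != len(c.requests) {
			guaranteed = false
		}
		for k, v := range c.requests {
			if c.limits[k] != v {
				guaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return "BestEffort" // kubepods-besteffort.slice
	case guaranteed:
		return "Guaranteed" // directly under kubepods.slice
	default:
		return "Burstable" // kubepods-burstable.slice, as for the pods above
	}
}

func main() {
	// Illustrative values: kubeadm's static pods request CPU but set no limits.
	apiserver := []resources{{requests: map[string]string{"cpu": "250m"}}}
	fmt.Println(qosClass(apiserver)) // Burstable
}
```
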
Nov 12 20:49:32.659211 kubelet[2798]: I1112 20:49:32.658874 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:32.659211 kubelet[2798]: I1112 20:49:32.658933 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:32.659211 kubelet[2798]: I1112 20:49:32.658963 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:32.659211 kubelet[2798]: I1112 20:49:32.659018 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:32.659211 kubelet[2798]: I1112 20:49:32.659041 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:32.659824 kubelet[2798]: I1112 20:49:32.659064 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32e5c39b7a74d68811506a63e725215e-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-74\" (UID: \"32e5c39b7a74d68811506a63e725215e\") " pod="kube-system/kube-scheduler-ip-172-31-17-74" Nov 12 20:49:32.687288 containerd[1977]: time="2024-11-12T20:49:32.686264107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-74,Uid:77f46a4f56ad8ecad6a9658cd93ab4c1,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:32.691037 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2823) Nov 12 20:49:32.702934 systemd[1]: Created slice kubepods-burstable-pod103716a040614dc48640d0c4c873e078.slice - libcontainer container kubepods-burstable-pod103716a040614dc48640d0c4c873e078.slice. Nov 12 20:49:32.732910 systemd[1]: Created slice kubepods-burstable-pod32e5c39b7a74d68811506a63e725215e.slice - libcontainer container kubepods-burstable-pod32e5c39b7a74d68811506a63e725215e.slice. 
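
The RunPodSandbox entries are the kubelet's CRI calls materializing the static pods it found under /etc/kubernetes/manifests ("Adding static pod path" above): one sandbox per pod, backed by the pause image pulled earlier, then the real containers started inside it. A sketch of addressing the same CRI endpoint directly over gRPC, limited to a read-only Version call and assuming the default containerd socket:

```go
// Sketch: query the CRI runtime service the kubelet is driving above.
// Assumes the containerd socket path; uses a read-only Version call
// rather than creating a sandbox.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion) // containerd v1.7.21 per the log
}
```
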
Nov 12 20:49:32.825893 kubelet[2798]: I1112 20:49:32.825044 2798 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-74" Nov 12 20:49:32.827803 kubelet[2798]: E1112 20:49:32.827641 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.74:6443/api/v1/nodes\": dial tcp 172.31.17.74:6443: connect: connection refused" node="ip-172-31-17-74" Nov 12 20:49:32.970195 kubelet[2798]: E1112 20:49:32.969808 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-74?timeout=10s\": dial tcp 172.31.17.74:6443: connect: connection refused" interval="800ms" Nov 12 20:49:33.033905 containerd[1977]: time="2024-11-12T20:49:33.026544248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-74,Uid:103716a040614dc48640d0c4c873e078,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:33.047424 containerd[1977]: time="2024-11-12T20:49:33.047234238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-74,Uid:32e5c39b7a74d68811506a63e725215e,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:33.236248 kubelet[2798]: I1112 20:49:33.236146 2798 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-74" Nov 12 20:49:33.236623 kubelet[2798]: E1112 20:49:33.236586 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.74:6443/api/v1/nodes\": dial tcp 172.31.17.74:6443: connect: connection refused" node="ip-172-31-17-74" Nov 12 20:49:33.322598 kubelet[2798]: W1112 20:49:33.322533 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-74&limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:33.322721 kubelet[2798]: E1112 20:49:33.322609 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-74&limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:33.358853 kubelet[2798]: W1112 20:49:33.358779 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:33.359055 kubelet[2798]: E1112 20:49:33.358862 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:33.390551 kubelet[2798]: W1112 20:49:33.390206 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:33.390551 kubelet[2798]: E1112 20:49:33.390516 2798 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:33.395145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806561381.mount: Deactivated successfully. Nov 12 20:49:33.420840 containerd[1977]: time="2024-11-12T20:49:33.420785799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:49:33.422504 containerd[1977]: time="2024-11-12T20:49:33.422449587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:49:33.425155 containerd[1977]: time="2024-11-12T20:49:33.425109320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:49:33.426943 containerd[1977]: time="2024-11-12T20:49:33.426897716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:49:33.428313 containerd[1977]: time="2024-11-12T20:49:33.428249398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:49:33.432025 containerd[1977]: time="2024-11-12T20:49:33.431511887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:49:33.432025 containerd[1977]: time="2024-11-12T20:49:33.431568722Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:49:33.449650 containerd[1977]: time="2024-11-12T20:49:33.447856416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:49:33.453111 containerd[1977]: time="2024-11-12T20:49:33.452970523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 400.423628ms" Nov 12 20:49:33.470458 containerd[1977]: time="2024-11-12T20:49:33.468131593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.640165ms" Nov 12 20:49:33.479269 containerd[1977]: time="2024-11-12T20:49:33.479040785Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.379236ms" Nov 12 20:49:33.771155 kubelet[2798]: E1112 20:49:33.771096 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-74?timeout=10s\": dial tcp 172.31.17.74:6443: connect: connection refused" interval="1.6s" Nov 12 20:49:33.875936 kubelet[2798]: W1112 20:49:33.865380 2798 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.74:6443: connect: connection refused Nov 12 20:49:33.875936 kubelet[2798]: E1112 20:49:33.865535 2798 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:34.048225 kubelet[2798]: I1112 20:49:34.045671 2798 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-74" Nov 12 20:49:34.048225 kubelet[2798]: E1112 20:49:34.046037 2798 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.74:6443/api/v1/nodes\": dial tcp 172.31.17.74:6443: connect: connection refused" node="ip-172-31-17-74" Nov 12 20:49:34.121163 containerd[1977]: time="2024-11-12T20:49:34.121045003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:34.121163 containerd[1977]: time="2024-11-12T20:49:34.121113266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:34.121163 containerd[1977]: time="2024-11-12T20:49:34.121135756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:34.122387 containerd[1977]: time="2024-11-12T20:49:34.121971024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:34.123921 containerd[1977]: time="2024-11-12T20:49:34.123607307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:34.123921 containerd[1977]: time="2024-11-12T20:49:34.123677715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:34.123921 containerd[1977]: time="2024-11-12T20:49:34.123701939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:34.123921 containerd[1977]: time="2024-11-12T20:49:34.123807633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:34.134921 containerd[1977]: time="2024-11-12T20:49:34.133357325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:34.134921 containerd[1977]: time="2024-11-12T20:49:34.133462776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:34.134921 containerd[1977]: time="2024-11-12T20:49:34.133484777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:34.134921 containerd[1977]: time="2024-11-12T20:49:34.133643904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:34.174581 systemd[1]: Started cri-containerd-2b38d9223b0ee740da4794c14a718e7b36db7704d7f20bc4af2255d8241f3dba.scope - libcontainer container 2b38d9223b0ee740da4794c14a718e7b36db7704d7f20bc4af2255d8241f3dba. Nov 12 20:49:34.178192 systemd[1]: Started cri-containerd-eaf4e04d8ae2ef79d2d839d6e8c8bcd1da2ee19c98f98a01f82d75aedb3ed18a.scope - libcontainer container eaf4e04d8ae2ef79d2d839d6e8c8bcd1da2ee19c98f98a01f82d75aedb3ed18a. Nov 12 20:49:34.194143 systemd[1]: Started cri-containerd-89e51a937096a5504cd55cdcd2684ed15e37c91314d0a669be9edf71ec15fb26.scope - libcontainer container 89e51a937096a5504cd55cdcd2684ed15e37c91314d0a669be9edf71ec15fb26. Nov 12 20:49:34.293653 containerd[1977]: time="2024-11-12T20:49:34.292244634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-74,Uid:103716a040614dc48640d0c4c873e078,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b38d9223b0ee740da4794c14a718e7b36db7704d7f20bc4af2255d8241f3dba\"" Nov 12 20:49:34.310595 containerd[1977]: time="2024-11-12T20:49:34.310358310Z" level=info msg="CreateContainer within sandbox \"2b38d9223b0ee740da4794c14a718e7b36db7704d7f20bc4af2255d8241f3dba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:49:34.321095 containerd[1977]: time="2024-11-12T20:49:34.321052123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-74,Uid:77f46a4f56ad8ecad6a9658cd93ab4c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaf4e04d8ae2ef79d2d839d6e8c8bcd1da2ee19c98f98a01f82d75aedb3ed18a\"" Nov 12 20:49:34.327742 containerd[1977]: time="2024-11-12T20:49:34.327697215Z" level=info msg="CreateContainer within sandbox \"eaf4e04d8ae2ef79d2d839d6e8c8bcd1da2ee19c98f98a01f82d75aedb3ed18a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:49:34.328113 containerd[1977]: time="2024-11-12T20:49:34.328084012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-74,Uid:32e5c39b7a74d68811506a63e725215e,Namespace:kube-system,Attempt:0,} returns sandbox id \"89e51a937096a5504cd55cdcd2684ed15e37c91314d0a669be9edf71ec15fb26\"" Nov 12 20:49:34.335628 containerd[1977]: time="2024-11-12T20:49:34.335531501Z" level=info msg="CreateContainer within sandbox \"89e51a937096a5504cd55cdcd2684ed15e37c91314d0a669be9edf71ec15fb26\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:49:34.353336 containerd[1977]: time="2024-11-12T20:49:34.353135848Z" level=info msg="CreateContainer within sandbox \"2b38d9223b0ee740da4794c14a718e7b36db7704d7f20bc4af2255d8241f3dba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4\"" Nov 12 20:49:34.354305 containerd[1977]: 
time="2024-11-12T20:49:34.354269959Z" level=info msg="StartContainer for \"970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4\"" Nov 12 20:49:34.363731 containerd[1977]: time="2024-11-12T20:49:34.363676917Z" level=info msg="CreateContainer within sandbox \"eaf4e04d8ae2ef79d2d839d6e8c8bcd1da2ee19c98f98a01f82d75aedb3ed18a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c4b6bbbc6247c35f0e4354c904a9ed0b74fe1b1c41ac6b327356471b3b5e0cd\"" Nov 12 20:49:34.369966 containerd[1977]: time="2024-11-12T20:49:34.369165314Z" level=info msg="StartContainer for \"1c4b6bbbc6247c35f0e4354c904a9ed0b74fe1b1c41ac6b327356471b3b5e0cd\"" Nov 12 20:49:34.398428 containerd[1977]: time="2024-11-12T20:49:34.398357229Z" level=info msg="CreateContainer within sandbox \"89e51a937096a5504cd55cdcd2684ed15e37c91314d0a669be9edf71ec15fb26\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7\"" Nov 12 20:49:34.399001 containerd[1977]: time="2024-11-12T20:49:34.398962974Z" level=info msg="StartContainer for \"14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7\"" Nov 12 20:49:34.408515 kubelet[2798]: E1112 20:49:34.408337 2798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.74:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:49:34.438170 systemd[1]: Started cri-containerd-970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4.scope - libcontainer container 970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4. Nov 12 20:49:34.457226 systemd[1]: Started cri-containerd-1c4b6bbbc6247c35f0e4354c904a9ed0b74fe1b1c41ac6b327356471b3b5e0cd.scope - libcontainer container 1c4b6bbbc6247c35f0e4354c904a9ed0b74fe1b1c41ac6b327356471b3b5e0cd. Nov 12 20:49:34.508070 systemd[1]: Started cri-containerd-14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7.scope - libcontainer container 14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7. 
Nov 12 20:49:34.641627 containerd[1977]: time="2024-11-12T20:49:34.641352328Z" level=info msg="StartContainer for \"970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4\" returns successfully" Nov 12 20:49:34.722178 containerd[1977]: time="2024-11-12T20:49:34.722113169Z" level=info msg="StartContainer for \"1c4b6bbbc6247c35f0e4354c904a9ed0b74fe1b1c41ac6b327356471b3b5e0cd\" returns successfully" Nov 12 20:49:34.737446 containerd[1977]: time="2024-11-12T20:49:34.737313315Z" level=info msg="StartContainer for \"14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7\" returns successfully" Nov 12 20:49:35.372521 kubelet[2798]: E1112 20:49:35.372104 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-74?timeout=10s\": dial tcp 172.31.17.74:6443: connect: connection refused" interval="3.2s" Nov 12 20:49:35.649037 kubelet[2798]: I1112 20:49:35.648613 2798 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-74" Nov 12 20:49:38.595353 kubelet[2798]: E1112 20:49:38.595112 2798 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-74\" not found" node="ip-172-31-17-74" Nov 12 20:49:38.685252 kubelet[2798]: I1112 20:49:38.685187 2798 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-74" Nov 12 20:49:39.271652 kubelet[2798]: I1112 20:49:39.271603 2798 apiserver.go:52] "Watching apiserver" Nov 12 20:49:39.350565 kubelet[2798]: I1112 20:49:39.350498 2798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:49:41.539385 systemd[1]: Reloading requested from client PID 3251 ('systemctl') (unit session-7.scope)... Nov 12 20:49:41.539406 systemd[1]: Reloading... Nov 12 20:49:41.813923 zram_generator::config[3292]: No configuration found. Nov 12 20:49:42.049731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:49:42.227340 systemd[1]: Reloading finished in 687 ms. Nov 12 20:49:42.348981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:42.402568 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:49:42.404290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:42.432065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:43.009215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:43.018436 (kubelet)[3348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:49:43.238941 kubelet[3348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:49:43.238941 kubelet[3348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:49:43.238941 kubelet[3348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:49:43.238941 kubelet[3348]: I1112 20:49:43.238335 3348 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:49:43.256148 kubelet[3348]: I1112 20:49:43.256109 3348 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:49:43.256582 kubelet[3348]: I1112 20:49:43.256451 3348 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:49:43.257272 kubelet[3348]: I1112 20:49:43.257254 3348 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:49:43.261092 kubelet[3348]: I1112 20:49:43.260406 3348 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:49:43.271249 kubelet[3348]: I1112 20:49:43.271118 3348 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:49:43.291525 kubelet[3348]: E1112 20:49:43.291446 3348 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:49:43.292166 kubelet[3348]: I1112 20:49:43.291493 3348 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:49:43.295990 kubelet[3348]: I1112 20:49:43.295839 3348 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:49:43.296410 kubelet[3348]: I1112 20:49:43.296274 3348 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:49:43.297002 kubelet[3348]: I1112 20:49:43.296934 3348 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:49:43.298134 kubelet[3348]: I1112 20:49:43.296979 3348 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-17-74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:49:43.298134 kubelet[3348]: I1112 20:49:43.297496 3348 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:49:43.298134 kubelet[3348]: I1112 20:49:43.297529 3348 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:49:43.298134 kubelet[3348]: I1112 20:49:43.297578 3348 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:49:43.299149 kubelet[3348]: I1112 20:49:43.298866 3348 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:49:43.299149 kubelet[3348]: I1112 20:49:43.298948 3348 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:49:43.299149 kubelet[3348]: I1112 20:49:43.298990 3348 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:49:43.299149 kubelet[3348]: I1112 20:49:43.299013 3348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:49:43.304040 kubelet[3348]: I1112 20:49:43.302292 3348 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:49:43.304040 kubelet[3348]: I1112 20:49:43.303118 3348 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:49:43.304040 kubelet[3348]: I1112 20:49:43.303615 3348 server.go:1269] "Started kubelet" Nov 12 20:49:43.314778 kubelet[3348]: I1112 20:49:43.314747 3348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:49:43.317719 kubelet[3348]: I1112 20:49:43.316076 3348 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:49:43.319550 kubelet[3348]: I1112 20:49:43.319400 3348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:49:43.322941 kubelet[3348]: I1112 20:49:43.322912 3348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:49:43.325798 kubelet[3348]: I1112 
20:49:43.325769 3348 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:49:43.340715 kubelet[3348]: E1112 20:49:43.331185 3348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-74\" not found" Nov 12 20:49:43.341047 kubelet[3348]: I1112 20:49:43.331678 3348 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:49:43.341528 kubelet[3348]: I1112 20:49:43.336792 3348 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:49:43.342702 kubelet[3348]: I1112 20:49:43.336972 3348 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:49:43.352653 kubelet[3348]: I1112 20:49:43.352604 3348 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:49:43.355703 kubelet[3348]: I1112 20:49:43.355674 3348 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:49:43.356057 kubelet[3348]: I1112 20:49:43.356030 3348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:49:43.383705 kubelet[3348]: I1112 20:49:43.383598 3348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:49:43.395624 kubelet[3348]: E1112 20:49:43.393489 3348 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:49:43.395624 kubelet[3348]: I1112 20:49:43.393686 3348 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:49:43.401267 kubelet[3348]: I1112 20:49:43.400551 3348 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:49:43.401267 kubelet[3348]: I1112 20:49:43.400600 3348 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:49:43.401267 kubelet[3348]: I1112 20:49:43.400624 3348 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:49:43.401267 kubelet[3348]: E1112 20:49:43.400677 3348 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:49:43.493152 kubelet[3348]: I1112 20:49:43.493122 3348 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:49:43.493152 kubelet[3348]: I1112 20:49:43.493142 3348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:49:43.493152 kubelet[3348]: I1112 20:49:43.493163 3348 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:49:43.494084 kubelet[3348]: I1112 20:49:43.493384 3348 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:49:43.494084 kubelet[3348]: I1112 20:49:43.493398 3348 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:49:43.494084 kubelet[3348]: I1112 20:49:43.493422 3348 policy_none.go:49] "None policy: Start" Nov 12 20:49:43.496056 kubelet[3348]: I1112 20:49:43.496029 3348 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:49:43.496217 kubelet[3348]: I1112 20:49:43.496075 3348 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:49:43.496969 kubelet[3348]: I1112 20:49:43.496351 3348 state_mem.go:75] "Updated machine memory state" Nov 12 20:49:43.501160 kubelet[3348]: E1112 20:49:43.501101 3348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:49:43.508953 kubelet[3348]: I1112 20:49:43.508514 3348 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:49:43.509088 kubelet[3348]: I1112 20:49:43.508959 3348 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:49:43.509088 kubelet[3348]: I1112 20:49:43.508976 3348 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:49:43.510767 kubelet[3348]: I1112 20:49:43.510352 3348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:49:43.630355 kubelet[3348]: I1112 20:49:43.630323 3348 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-74" Nov 12 20:49:43.640130 kubelet[3348]: I1112 20:49:43.640093 3348 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-17-74" Nov 12 20:49:43.640283 kubelet[3348]: I1112 20:49:43.640194 3348 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-74" Nov 12 20:49:43.715865 kubelet[3348]: E1112 20:49:43.715815 3348 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-74\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:43.716093 kubelet[3348]: E1112 20:49:43.716075 3348 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-74\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:43.754466 kubelet[3348]: I1112 20:49:43.754138 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32e5c39b7a74d68811506a63e725215e-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-17-74\" (UID: \"32e5c39b7a74d68811506a63e725215e\") " pod="kube-system/kube-scheduler-ip-172-31-17-74" Nov 12 20:49:43.754466 kubelet[3348]: I1112 20:49:43.754199 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77f46a4f56ad8ecad6a9658cd93ab4c1-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-74\" (UID: \"77f46a4f56ad8ecad6a9658cd93ab4c1\") " pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:43.754466 kubelet[3348]: I1112 20:49:43.754227 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77f46a4f56ad8ecad6a9658cd93ab4c1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-74\" (UID: \"77f46a4f56ad8ecad6a9658cd93ab4c1\") " pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:43.754466 kubelet[3348]: I1112 20:49:43.754255 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77f46a4f56ad8ecad6a9658cd93ab4c1-ca-certs\") pod \"kube-apiserver-ip-172-31-17-74\" (UID: \"77f46a4f56ad8ecad6a9658cd93ab4c1\") " pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:43.754466 kubelet[3348]: I1112 20:49:43.754279 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:43.754941 kubelet[3348]: I1112 20:49:43.754304 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:43.754941 kubelet[3348]: I1112 20:49:43.754327 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:43.754941 kubelet[3348]: I1112 20:49:43.754350 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:43.754941 kubelet[3348]: I1112 20:49:43.754375 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/103716a040614dc48640d0c4c873e078-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-74\" (UID: \"103716a040614dc48640d0c4c873e078\") " pod="kube-system/kube-controller-manager-ip-172-31-17-74" Nov 12 20:49:44.313941 kubelet[3348]: I1112 20:49:44.312374 3348 apiserver.go:52] "Watching apiserver" Nov 12 
20:49:44.341269 kubelet[3348]: I1112 20:49:44.341185 3348 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:49:44.493055 kubelet[3348]: E1112 20:49:44.492979 3348 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-74\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-74" Nov 12 20:49:44.569522 kubelet[3348]: I1112 20:49:44.569168 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-74" podStartSLOduration=3.569147302 podStartE2EDuration="3.569147302s" podCreationTimestamp="2024-11-12 20:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:44.56328324 +0000 UTC m=+1.459003672" watchObservedRunningTime="2024-11-12 20:49:44.569147302 +0000 UTC m=+1.464867736" Nov 12 20:49:44.704328 kubelet[3348]: I1112 20:49:44.704064 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-74" podStartSLOduration=2.704042824 podStartE2EDuration="2.704042824s" podCreationTimestamp="2024-11-12 20:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:44.582864356 +0000 UTC m=+1.478584786" watchObservedRunningTime="2024-11-12 20:49:44.704042824 +0000 UTC m=+1.599763257" Nov 12 20:49:46.399129 kubelet[3348]: I1112 20:49:46.398768 3348 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:49:46.400400 containerd[1977]: time="2024-11-12T20:49:46.399948770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:49:46.400766 kubelet[3348]: I1112 20:49:46.400303 3348 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:49:47.110648 kubelet[3348]: I1112 20:49:47.110100 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-74" podStartSLOduration=4.110077083 podStartE2EDuration="4.110077083s" podCreationTimestamp="2024-11-12 20:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:44.704728723 +0000 UTC m=+1.600449155" watchObservedRunningTime="2024-11-12 20:49:47.110077083 +0000 UTC m=+4.005797515" Nov 12 20:49:47.173905 systemd[1]: Created slice kubepods-besteffort-pod02354a83_039c_4295_8779_c5a7abcd8b7f.slice - libcontainer container kubepods-besteffort-pod02354a83_039c_4295_8779_c5a7abcd8b7f.slice. 
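
The pod CIDR handed to the runtime above ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24") is a per-node /24, which bounds how many pod IPs this node can allocate. A quick standard-library check of what that range provides, as an illustrative sketch:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The CIDR from the log entry above; validate it the same way a
        // network component would before programming routes.
        ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        // A /24 leaves 8 host bits: 256 addresses for this node's pods.
        fmt.Printf("network=%v base=%v hostBits=%d addresses=%d\n",
            ipnet, ip, bits-ones, 1<<(bits-ones))
    }
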
Nov 12 20:49:47.189579 kubelet[3348]: I1112 20:49:47.189362 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02354a83-039c-4295-8779-c5a7abcd8b7f-kube-proxy\") pod \"kube-proxy-dx9cv\" (UID: \"02354a83-039c-4295-8779-c5a7abcd8b7f\") " pod="kube-system/kube-proxy-dx9cv"
Nov 12 20:49:47.189579 kubelet[3348]: I1112 20:49:47.189419 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02354a83-039c-4295-8779-c5a7abcd8b7f-xtables-lock\") pod \"kube-proxy-dx9cv\" (UID: \"02354a83-039c-4295-8779-c5a7abcd8b7f\") " pod="kube-system/kube-proxy-dx9cv"
Nov 12 20:49:47.189579 kubelet[3348]: I1112 20:49:47.189447 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757qh\" (UniqueName: \"kubernetes.io/projected/02354a83-039c-4295-8779-c5a7abcd8b7f-kube-api-access-757qh\") pod \"kube-proxy-dx9cv\" (UID: \"02354a83-039c-4295-8779-c5a7abcd8b7f\") " pod="kube-system/kube-proxy-dx9cv"
Nov 12 20:49:47.189579 kubelet[3348]: I1112 20:49:47.189476 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02354a83-039c-4295-8779-c5a7abcd8b7f-lib-modules\") pod \"kube-proxy-dx9cv\" (UID: \"02354a83-039c-4295-8779-c5a7abcd8b7f\") " pod="kube-system/kube-proxy-dx9cv"
Nov 12 20:49:47.566011 systemd[1]: Created slice kubepods-besteffort-poda8660631_8c71_4cc1_b87c_8ad8d6068ca1.slice - libcontainer container kubepods-besteffort-poda8660631_8c71_4cc1_b87c_8ad8d6068ca1.slice.
Nov 12 20:49:47.593399 kubelet[3348]: I1112 20:49:47.593277 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a8660631-8c71-4cc1-b87c-8ad8d6068ca1-var-lib-calico\") pod \"tigera-operator-f8bc97d4c-pjwgf\" (UID: \"a8660631-8c71-4cc1-b87c-8ad8d6068ca1\") " pod="tigera-operator/tigera-operator-f8bc97d4c-pjwgf"
Nov 12 20:49:47.593399 kubelet[3348]: I1112 20:49:47.593332 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9psbv\" (UniqueName: \"kubernetes.io/projected/a8660631-8c71-4cc1-b87c-8ad8d6068ca1-kube-api-access-9psbv\") pod \"tigera-operator-f8bc97d4c-pjwgf\" (UID: \"a8660631-8c71-4cc1-b87c-8ad8d6068ca1\") " pod="tigera-operator/tigera-operator-f8bc97d4c-pjwgf"
Nov 12 20:49:47.597465 kubelet[3348]: W1112 20:49:47.597372 3348 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-17-74" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-17-74' and this object
Nov 12 20:49:47.597465 kubelet[3348]: E1112 20:49:47.597427 3348 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ip-172-31-17-74\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-17-74' and this object" logger="UnhandledError"
Nov 12 20:49:47.598110 kubelet[3348]: W1112 20:49:47.598016 3348 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-74" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-17-74' and this object
Nov 12 20:49:47.598110 kubelet[3348]: E1112 20:49:47.598053 3348 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-17-74\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-17-74' and this object" logger="UnhandledError"
Nov 12 20:49:47.786527 containerd[1977]: time="2024-11-12T20:49:47.786476678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dx9cv,Uid:02354a83-039c-4295-8779-c5a7abcd8b7f,Namespace:kube-system,Attempt:0,}"
Nov 12 20:49:47.856651 containerd[1977]: time="2024-11-12T20:49:47.849823144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:47.856651 containerd[1977]: time="2024-11-12T20:49:47.849918730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:47.856651 containerd[1977]: time="2024-11-12T20:49:47.849940302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:47.856651 containerd[1977]: time="2024-11-12T20:49:47.850047705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:47.911158 systemd[1]: Started cri-containerd-3a2c6ad57d4ffa4580d32b47b0ed7577f8fe98fe572c50392ed0154673a45622.scope - libcontainer container 3a2c6ad57d4ffa4580d32b47b0ed7577f8fe98fe572c50392ed0154673a45622.
Nov 12 20:49:48.068051 containerd[1977]: time="2024-11-12T20:49:48.067993707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dx9cv,Uid:02354a83-039c-4295-8779-c5a7abcd8b7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a2c6ad57d4ffa4580d32b47b0ed7577f8fe98fe572c50392ed0154673a45622\""
Nov 12 20:49:48.072583 containerd[1977]: time="2024-11-12T20:49:48.072538861Z" level=info msg="CreateContainer within sandbox \"3a2c6ad57d4ffa4580d32b47b0ed7577f8fe98fe572c50392ed0154673a45622\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 20:49:48.117998 containerd[1977]: time="2024-11-12T20:49:48.115786679Z" level=info msg="CreateContainer within sandbox \"3a2c6ad57d4ffa4580d32b47b0ed7577f8fe98fe572c50392ed0154673a45622\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"083397164d6465a432afdbe8a4a3f1f297fa3449c55fdd81cdfa1ebf303831c4\""
Nov 12 20:49:48.119474 containerd[1977]: time="2024-11-12T20:49:48.119426636Z" level=info msg="StartContainer for \"083397164d6465a432afdbe8a4a3f1f297fa3449c55fdd81cdfa1ebf303831c4\""
Nov 12 20:49:48.186180 systemd[1]: Started cri-containerd-083397164d6465a432afdbe8a4a3f1f297fa3449c55fdd81cdfa1ebf303831c4.scope - libcontainer container 083397164d6465a432afdbe8a4a3f1f297fa3449c55fdd81cdfa1ebf303831c4.
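
The two reflector failures above come from the node authorizer: a kubelet may only read ConfigMaps referenced by pods already bound to it, and at this instant the authorization graph has no edge between node ip-172-31-17-74 and the tigera-operator ConfigMaps, hence "no relationship found". The denial is transient and clears once the pod binding propagates. One way to reproduce the check from the node's own credentials is a SelfSubjectAccessReview; this is a hedged client-go sketch, and the kubeconfig path is a hypothetical example, not taken from this host:

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical path to the kubelet's kubeconfig; adjust for the host.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Ask the API server whether the current identity may perform the
        // exact request the reflector attempted.
        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Namespace: "tigera-operator",
                    Verb:      "list",
                    Resource:  "configmaps",
                },
            },
        }
        res, err := client.AuthorizationV1().SelfSubjectAccessReviews().
            Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }
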
Nov 12 20:49:48.248012 containerd[1977]: time="2024-11-12T20:49:48.247943186Z" level=info msg="StartContainer for \"083397164d6465a432afdbe8a4a3f1f297fa3449c55fdd81cdfa1ebf303831c4\" returns successfully"
Nov 12 20:49:48.498773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066685395.mount: Deactivated successfully.
Nov 12 20:49:48.744774 kubelet[3348]: E1112 20:49:48.744388 3348 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Nov 12 20:49:48.744774 kubelet[3348]: E1112 20:49:48.744442 3348 projected.go:194] Error preparing data for projected volume kube-api-access-9psbv for pod tigera-operator/tigera-operator-f8bc97d4c-pjwgf: failed to sync configmap cache: timed out waiting for the condition
Nov 12 20:49:48.744774 kubelet[3348]: E1112 20:49:48.744525 3348 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8660631-8c71-4cc1-b87c-8ad8d6068ca1-kube-api-access-9psbv podName:a8660631-8c71-4cc1-b87c-8ad8d6068ca1 nodeName:}" failed. No retries permitted until 2024-11-12 20:49:49.244499085 +0000 UTC m=+6.140219516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9psbv" (UniqueName: "kubernetes.io/projected/a8660631-8c71-4cc1-b87c-8ad8d6068ca1-kube-api-access-9psbv") pod "tigera-operator-f8bc97d4c-pjwgf" (UID: "a8660631-8c71-4cc1-b87c-8ad8d6068ca1") : failed to sync configmap cache: timed out waiting for the condition
Nov 12 20:49:49.226611 kubelet[3348]: I1112 20:49:49.226529 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dx9cv" podStartSLOduration=2.226507364 podStartE2EDuration="2.226507364s" podCreationTimestamp="2024-11-12 20:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:48.497329858 +0000 UTC m=+5.393050287" watchObservedRunningTime="2024-11-12 20:49:49.226507364 +0000 UTC m=+6.122227796"
Nov 12 20:49:49.377997 containerd[1977]: time="2024-11-12T20:49:49.377940696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-pjwgf,Uid:a8660631-8c71-4cc1-b87c-8ad8d6068ca1,Namespace:tigera-operator,Attempt:0,}"
Nov 12 20:49:49.458503 containerd[1977]: time="2024-11-12T20:49:49.458073275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:49.458993 containerd[1977]: time="2024-11-12T20:49:49.458386716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:49.458993 containerd[1977]: time="2024-11-12T20:49:49.458612372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:49.461980 containerd[1977]: time="2024-11-12T20:49:49.460620061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:49.510204 systemd[1]: Started cri-containerd-f07c1e22479bd8bffcc982d0c718a10556d0c736de5d5be1980c636a15c3d36d.scope - libcontainer container f07c1e22479bd8bffcc982d0c718a10556d0c736de5d5be1980c636a15c3d36d.
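
The nestedpendingoperations entry above shows the volume manager's retry gating: a failed MountVolume.SetUp is not retried immediately but only after a durationBeforeRetry, starting at the logged 500ms and growing exponentially on repeated failures. A small Go sketch of that backoff shape; the doubling factor and the cap are assumptions for illustration, not read from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Initial delay matches the durationBeforeRetry=500ms in the log.
        delay, maxDelay := 500*time.Millisecond, 2*time.Minute
        now := time.Now()
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d failed; no retries permitted until %s\n",
                attempt, now.Add(delay).Format(time.RFC3339Nano))
            now = now.Add(delay)
            // Double the wait after each failure, capped at maxDelay.
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
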
Nov 12 20:49:49.578638 containerd[1977]: time="2024-11-12T20:49:49.578592253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-pjwgf,Uid:a8660631-8c71-4cc1-b87c-8ad8d6068ca1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f07c1e22479bd8bffcc982d0c718a10556d0c736de5d5be1980c636a15c3d36d\""
Nov 12 20:49:49.581250 containerd[1977]: time="2024-11-12T20:49:49.580982468Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 20:49:51.234744 sudo[2289]: pam_unix(sudo:session): session closed for user root
Nov 12 20:49:51.264232 sshd[2286]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:51.275294 systemd[1]: sshd@6-172.31.17.74:22-139.178.89.65:52236.service: Deactivated successfully.
Nov 12 20:49:51.286630 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 20:49:51.286895 systemd[1]: session-7.scope: Consumed 4.354s CPU time, 145.5M memory peak, 0B memory swap peak.
Nov 12 20:49:51.292128 systemd-logind[1946]: Session 7 logged out. Waiting for processes to exit.
Nov 12 20:49:51.304432 systemd-logind[1946]: Removed session 7.
Nov 12 20:49:51.632586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704182325.mount: Deactivated successfully.
Nov 12 20:49:53.066320 containerd[1977]: time="2024-11-12T20:49:53.066255728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:49:53.069640 containerd[1977]: time="2024-11-12T20:49:53.069590780Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763363"
Nov 12 20:49:53.071171 containerd[1977]: time="2024-11-12T20:49:53.071101177Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:49:53.077926 containerd[1977]: time="2024-11-12T20:49:53.077093338Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:49:53.078905 containerd[1977]: time="2024-11-12T20:49:53.078175326Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 3.49714973s"
Nov 12 20:49:53.078905 containerd[1977]: time="2024-11-12T20:49:53.078210334Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\""
Nov 12 20:49:53.232387 containerd[1977]: time="2024-11-12T20:49:53.232341542Z" level=info msg="CreateContainer within sandbox \"f07c1e22479bd8bffcc982d0c718a10556d0c736de5d5be1980c636a15c3d36d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 20:49:53.296663 containerd[1977]: time="2024-11-12T20:49:53.296601580Z" level=info msg="CreateContainer within sandbox \"f07c1e22479bd8bffcc982d0c718a10556d0c736de5d5be1980c636a15c3d36d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526\""
Nov 12 20:49:53.302358 containerd[1977]: time="2024-11-12T20:49:53.302319645Z" level=info msg="StartContainer for \"384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526\""
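
The pull above reports bytes read=21763363 over 3.49714973s, which works out to roughly 6 MiB/s. The arithmetic, using only figures taken from the log entries:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // bytes read and wall time from the "stop pulling" / "Pulled image"
        // entries for quay.io/tigera/operator:v1.36.0.
        bytes := 21763363.0
        d, err := time.ParseDuration("3.49714973s")
        if err != nil {
            panic(err)
        }
        mib := bytes / (1 << 20)
        fmt.Printf("%.1f MiB in %v -> %.2f MiB/s\n", mib, d, mib/d.Seconds())
    }
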
msg="StartContainer for \"384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526\"" Nov 12 20:49:53.436750 systemd[1]: Started cri-containerd-384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526.scope - libcontainer container 384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526. Nov 12 20:49:53.694462 containerd[1977]: time="2024-11-12T20:49:53.694047310Z" level=info msg="StartContainer for \"384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526\" returns successfully" Nov 12 20:49:57.489333 kubelet[3348]: I1112 20:49:57.488773 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-f8bc97d4c-pjwgf" podStartSLOduration=6.979299815 podStartE2EDuration="10.488729454s" podCreationTimestamp="2024-11-12 20:49:47 +0000 UTC" firstStartedPulling="2024-11-12 20:49:49.580274443 +0000 UTC m=+6.475994859" lastFinishedPulling="2024-11-12 20:49:53.089704073 +0000 UTC m=+9.985424498" observedRunningTime="2024-11-12 20:49:54.688179587 +0000 UTC m=+11.583900022" watchObservedRunningTime="2024-11-12 20:49:57.488729454 +0000 UTC m=+14.384449886" Nov 12 20:49:57.509607 systemd[1]: Created slice kubepods-besteffort-podbbdfce8d_b954_4f13_8a2a_92898704adb8.slice - libcontainer container kubepods-besteffort-podbbdfce8d_b954_4f13_8a2a_92898704adb8.slice. Nov 12 20:49:57.649634 systemd[1]: Created slice kubepods-besteffort-podfe93d91a_2352_41b3_806b_20d080484120.slice - libcontainer container kubepods-besteffort-podfe93d91a_2352_41b3_806b_20d080484120.slice. Nov 12 20:49:57.661993 kubelet[3348]: I1112 20:49:57.661946 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bbdfce8d-b954-4f13-8a2a-92898704adb8-typha-certs\") pod \"calico-typha-7866565fd4-5pwnl\" (UID: \"bbdfce8d-b954-4f13-8a2a-92898704adb8\") " pod="calico-system/calico-typha-7866565fd4-5pwnl" Nov 12 20:49:57.662178 kubelet[3348]: I1112 20:49:57.662020 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c72xp\" (UniqueName: \"kubernetes.io/projected/bbdfce8d-b954-4f13-8a2a-92898704adb8-kube-api-access-c72xp\") pod \"calico-typha-7866565fd4-5pwnl\" (UID: \"bbdfce8d-b954-4f13-8a2a-92898704adb8\") " pod="calico-system/calico-typha-7866565fd4-5pwnl" Nov 12 20:49:57.662178 kubelet[3348]: I1112 20:49:57.662047 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbdfce8d-b954-4f13-8a2a-92898704adb8-tigera-ca-bundle\") pod \"calico-typha-7866565fd4-5pwnl\" (UID: \"bbdfce8d-b954-4f13-8a2a-92898704adb8\") " pod="calico-system/calico-typha-7866565fd4-5pwnl" Nov 12 20:49:57.764575 kubelet[3348]: I1112 20:49:57.763156 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-cni-log-dir\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764575 kubelet[3348]: I1112 20:49:57.763226 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe93d91a-2352-41b3-806b-20d080484120-tigera-ca-bundle\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " 
pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764575 kubelet[3348]: I1112 20:49:57.763252 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-flexvol-driver-host\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764575 kubelet[3348]: I1112 20:49:57.763291 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-var-lib-calico\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764575 kubelet[3348]: I1112 20:49:57.763315 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbjh8\" (UniqueName: \"kubernetes.io/projected/fe93d91a-2352-41b3-806b-20d080484120-kube-api-access-pbjh8\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764914 kubelet[3348]: I1112 20:49:57.763341 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-lib-modules\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764914 kubelet[3348]: I1112 20:49:57.763363 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-cni-net-dir\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764914 kubelet[3348]: I1112 20:49:57.763383 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-xtables-lock\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764914 kubelet[3348]: I1112 20:49:57.763410 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-policysync\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.764914 kubelet[3348]: I1112 20:49:57.763431 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-cni-bin-dir\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.765147 kubelet[3348]: I1112 20:49:57.763454 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fe93d91a-2352-41b3-806b-20d080484120-node-certs\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.765147 kubelet[3348]: I1112 
20:49:57.763476 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fe93d91a-2352-41b3-806b-20d080484120-var-run-calico\") pod \"calico-node-27qdb\" (UID: \"fe93d91a-2352-41b3-806b-20d080484120\") " pod="calico-system/calico-node-27qdb" Nov 12 20:49:57.815700 containerd[1977]: time="2024-11-12T20:49:57.815655972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7866565fd4-5pwnl,Uid:bbdfce8d-b954-4f13-8a2a-92898704adb8,Namespace:calico-system,Attempt:0,}" Nov 12 20:49:57.846937 kubelet[3348]: E1112 20:49:57.839630 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:49:57.874182 kubelet[3348]: E1112 20:49:57.872846 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.874182 kubelet[3348]: W1112 20:49:57.872878 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.874182 kubelet[3348]: E1112 20:49:57.872935 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:57.894907 containerd[1977]: time="2024-11-12T20:49:57.883116207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:57.894907 containerd[1977]: time="2024-11-12T20:49:57.883208115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:57.894907 containerd[1977]: time="2024-11-12T20:49:57.883232927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:57.894907 containerd[1977]: time="2024-11-12T20:49:57.883369418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:57.909797 kubelet[3348]: E1112 20:49:57.905398 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.909797 kubelet[3348]: W1112 20:49:57.905430 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.909797 kubelet[3348]: E1112 20:49:57.905555 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:57.914562 kubelet[3348]: E1112 20:49:57.913957 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.914562 kubelet[3348]: W1112 20:49:57.914178 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.914562 kubelet[3348]: E1112 20:49:57.914209 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:57.978450 kubelet[3348]: E1112 20:49:57.975802 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.978450 kubelet[3348]: W1112 20:49:57.975835 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.981153 kubelet[3348]: E1112 20:49:57.978494 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:57.981153 kubelet[3348]: I1112 20:49:57.978561 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pg4\" (UniqueName: \"kubernetes.io/projected/ced66ac8-f918-4e87-97a3-aafcae5e3866-kube-api-access-p7pg4\") pod \"csi-node-driver-fgdrq\" (UID: \"ced66ac8-f918-4e87-97a3-aafcae5e3866\") " pod="calico-system/csi-node-driver-fgdrq" Nov 12 20:49:57.981983 kubelet[3348]: E1112 20:49:57.981938 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.982056 kubelet[3348]: W1112 20:49:57.981983 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.982056 kubelet[3348]: E1112 20:49:57.982022 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:57.982144 kubelet[3348]: I1112 20:49:57.982059 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ced66ac8-f918-4e87-97a3-aafcae5e3866-varrun\") pod \"csi-node-driver-fgdrq\" (UID: \"ced66ac8-f918-4e87-97a3-aafcae5e3866\") " pod="calico-system/csi-node-driver-fgdrq" Nov 12 20:49:57.985624 kubelet[3348]: E1112 20:49:57.984921 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.985624 kubelet[3348]: W1112 20:49:57.985030 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.985624 kubelet[3348]: E1112 20:49:57.985058 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:57.985624 kubelet[3348]: I1112 20:49:57.985182 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ced66ac8-f918-4e87-97a3-aafcae5e3866-socket-dir\") pod \"csi-node-driver-fgdrq\" (UID: \"ced66ac8-f918-4e87-97a3-aafcae5e3866\") " pod="calico-system/csi-node-driver-fgdrq" Nov 12 20:49:57.988352 kubelet[3348]: E1112 20:49:57.987693 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.988352 kubelet[3348]: W1112 20:49:57.987817 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.988431 containerd[1977]: time="2024-11-12T20:49:57.987982367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27qdb,Uid:fe93d91a-2352-41b3-806b-20d080484120,Namespace:calico-system,Attempt:0,}" Nov 12 20:49:57.993914 kubelet[3348]: E1112 20:49:57.993142 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:57.993914 kubelet[3348]: I1112 20:49:57.993213 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ced66ac8-f918-4e87-97a3-aafcae5e3866-kubelet-dir\") pod \"csi-node-driver-fgdrq\" (UID: \"ced66ac8-f918-4e87-97a3-aafcae5e3866\") " pod="calico-system/csi-node-driver-fgdrq" Nov 12 20:49:57.993914 kubelet[3348]: E1112 20:49:57.993342 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.993914 kubelet[3348]: W1112 20:49:57.993355 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.993914 kubelet[3348]: E1112 20:49:57.993378 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:57.997905 kubelet[3348]: E1112 20:49:57.994625 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.997905 kubelet[3348]: W1112 20:49:57.994660 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.997905 kubelet[3348]: E1112 20:49:57.994728 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:57.997905 kubelet[3348]: E1112 20:49:57.995316 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:57.997905 kubelet[3348]: W1112 20:49:57.995329 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:57.997905 kubelet[3348]: E1112 20:49:57.997807 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.006116 kubelet[3348]: E1112 20:49:57.998945 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.006116 kubelet[3348]: W1112 20:49:57.998966 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.006116 kubelet[3348]: E1112 20:49:58.005092 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.006116 kubelet[3348]: E1112 20:49:58.005423 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.006116 kubelet[3348]: W1112 20:49:58.005441 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.006116 kubelet[3348]: E1112 20:49:58.005953 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.006116 kubelet[3348]: I1112 20:49:58.006013 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ced66ac8-f918-4e87-97a3-aafcae5e3866-registration-dir\") pod \"csi-node-driver-fgdrq\" (UID: \"ced66ac8-f918-4e87-97a3-aafcae5e3866\") " pod="calico-system/csi-node-driver-fgdrq" Nov 12 20:49:58.007552 kubelet[3348]: E1112 20:49:58.006767 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.007552 kubelet[3348]: W1112 20:49:58.006784 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.012238 kubelet[3348]: E1112 20:49:58.008299 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:58.012238 kubelet[3348]: E1112 20:49:58.008507 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.012238 kubelet[3348]: W1112 20:49:58.008520 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.012238 kubelet[3348]: E1112 20:49:58.008537 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.012498 kubelet[3348]: E1112 20:49:58.012400 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.012498 kubelet[3348]: W1112 20:49:58.012421 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.012498 kubelet[3348]: E1112 20:49:58.012444 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.022611 systemd[1]: Started cri-containerd-8cc72cad3afceae5ced5ed320c1a9a3e87a44ae2dff21c928ed6862cc3c7cf47.scope - libcontainer container 8cc72cad3afceae5ced5ed320c1a9a3e87a44ae2dff21c928ed6862cc3c7cf47. Nov 12 20:49:58.027899 kubelet[3348]: E1112 20:49:58.027754 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.027899 kubelet[3348]: W1112 20:49:58.027789 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.027899 kubelet[3348]: E1112 20:49:58.027816 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.031852 kubelet[3348]: E1112 20:49:58.031824 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.031852 kubelet[3348]: W1112 20:49:58.031849 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.032126 kubelet[3348]: E1112 20:49:58.031873 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:58.033187 kubelet[3348]: E1112 20:49:58.033055 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.033187 kubelet[3348]: W1112 20:49:58.033188 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.033393 kubelet[3348]: E1112 20:49:58.033210 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.109695 containerd[1977]: time="2024-11-12T20:49:58.108712729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:58.109695 containerd[1977]: time="2024-11-12T20:49:58.108812017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:58.109695 containerd[1977]: time="2024-11-12T20:49:58.108850109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:58.112041 containerd[1977]: time="2024-11-12T20:49:58.109946334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:58.130393 kubelet[3348]: E1112 20:49:58.129090 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.130393 kubelet[3348]: W1112 20:49:58.129116 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.130393 kubelet[3348]: E1112 20:49:58.129232 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.130393 kubelet[3348]: E1112 20:49:58.129677 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.130393 kubelet[3348]: W1112 20:49:58.129690 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.130993 kubelet[3348]: E1112 20:49:58.130965 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.131418 kubelet[3348]: E1112 20:49:58.131397 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.131676 kubelet[3348]: W1112 20:49:58.131417 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.131676 kubelet[3348]: E1112 20:49:58.131477 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:58.133248 kubelet[3348]: E1112 20:49:58.133222 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.133323 kubelet[3348]: W1112 20:49:58.133257 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.133323 kubelet[3348]: E1112 20:49:58.133275 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.133651 kubelet[3348]: E1112 20:49:58.133562 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.133651 kubelet[3348]: W1112 20:49:58.133592 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.133651 kubelet[3348]: E1112 20:49:58.133632 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.134079 kubelet[3348]: E1112 20:49:58.134001 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.134079 kubelet[3348]: W1112 20:49:58.134012 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.134079 kubelet[3348]: E1112 20:49:58.134025 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.136506 kubelet[3348]: E1112 20:49:58.136258 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.136506 kubelet[3348]: W1112 20:49:58.136273 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.136506 kubelet[3348]: E1112 20:49:58.136288 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.136798 kubelet[3348]: E1112 20:49:58.136746 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.136798 kubelet[3348]: W1112 20:49:58.136759 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.136798 kubelet[3348]: E1112 20:49:58.136791 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:58.139310 kubelet[3348]: E1112 20:49:58.138461 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.139310 kubelet[3348]: W1112 20:49:58.138475 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.139310 kubelet[3348]: E1112 20:49:58.138489 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.139310 kubelet[3348]: E1112 20:49:58.138714 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.139310 kubelet[3348]: W1112 20:49:58.138723 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.139310 kubelet[3348]: E1112 20:49:58.138734 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.148758 kubelet[3348]: E1112 20:49:58.146948 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.148758 kubelet[3348]: W1112 20:49:58.146975 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.148758 kubelet[3348]: E1112 20:49:58.146999 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.148758 kubelet[3348]: E1112 20:49:58.147390 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.148758 kubelet[3348]: W1112 20:49:58.147405 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.148758 kubelet[3348]: E1112 20:49:58.147421 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.150747 kubelet[3348]: E1112 20:49:58.149127 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.150747 kubelet[3348]: W1112 20:49:58.149142 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.150747 kubelet[3348]: E1112 20:49:58.149159 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:49:58.151609 systemd[1]: Started cri-containerd-79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26.scope - libcontainer container 79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26. Nov 12 20:49:58.160207 kubelet[3348]: E1112 20:49:58.159566 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.160207 kubelet[3348]: W1112 20:49:58.159593 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.160207 kubelet[3348]: E1112 20:49:58.159620 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:49:58.213237 kubelet[3348]: E1112 20:49:58.213210 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:49:58.213590 kubelet[3348]: W1112 20:49:58.213568 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:49:58.213722 kubelet[3348]: E1112 20:49:58.213707 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
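The repeating driver-call.go / plugins.go triple above is kubelet's FlexVolume probe loop: it execs the driver binary with `init` and expects a JSON status object on stdout, but `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` is not installed, so the call yields empty output and unmarshalling fails. A minimal sketch of both halves of that contract (the path is taken from the log; everything else is illustrative, not kubelet's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver is expected to print for
// "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Driver path copied from the log; the binary does not exist on this node.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err) // binary missing, output stays empty
	}

	var st DriverStatus
	// Unmarshalling the empty output is what produces the logged
	// "unexpected end of JSON input" from Go's encoding/json.
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal error:", err)
	}
}
```

The loop is harmless noise here: no FlexVolume driver is meant to exist in that directory, and kubelet simply skips the plugin on every probe.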
Nov 12 20:49:58.271028 containerd[1977]: time="2024-11-12T20:49:58.270346893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27qdb,Uid:fe93d91a-2352-41b3-806b-20d080484120,Namespace:calico-system,Attempt:0,} returns sandbox id \"79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26\"" Nov 12 20:49:58.274978 containerd[1977]: time="2024-11-12T20:49:58.274055790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:49:58.340412 containerd[1977]: time="2024-11-12T20:49:58.340180319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7866565fd4-5pwnl,Uid:bbdfce8d-b954-4f13-8a2a-92898704adb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cc72cad3afceae5ced5ed320c1a9a3e87a44ae2dff21c928ed6862cc3c7cf47\"" Nov 12 20:49:59.416295 kubelet[3348]: E1112 20:49:59.416235 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:49:59.795769 containerd[1977]: time="2024-11-12T20:49:59.795721545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:59.797389 containerd[1977]: time="2024-11-12T20:49:59.797188758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:49:59.799102 containerd[1977]: time="2024-11-12T20:49:59.798869688Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:59.802022 containerd[1977]: time="2024-11-12T20:49:59.801984005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:59.803292 containerd[1977]: time="2024-11-12T20:49:59.802723515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.528618979s" Nov 12 20:49:59.803292 containerd[1977]: time="2024-11-12T20:49:59.802765151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:49:59.805041 containerd[1977]: time="2024-11-12T20:49:59.805009914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:49:59.806325 containerd[1977]: time="2024-11-12T20:49:59.806292710Z" level=info msg="CreateContainer within sandbox \"79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
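The ImageCreate / "stop pulling" / "Pulled image" sequence is containerd's CRI plugin fetching pod2daemon-flexvol into the k8s.io namespace (the io.cri-containerd.image=managed labels mark CRI-owned images). A minimal sketch of the same pull issued directly through the containerd Go client, assuming the stock socket path (kubelet itself goes through the CRI API instead):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket on the node.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```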
\"79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5\"" Nov 12 20:49:59.860347 containerd[1977]: time="2024-11-12T20:49:59.858194201Z" level=info msg="StartContainer for \"707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5\"" Nov 12 20:49:59.978147 systemd[1]: Started cri-containerd-707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5.scope - libcontainer container 707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5. Nov 12 20:50:00.106308 containerd[1977]: time="2024-11-12T20:50:00.106189533Z" level=info msg="StartContainer for \"707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5\" returns successfully" Nov 12 20:50:00.137964 systemd[1]: cri-containerd-707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5.scope: Deactivated successfully. Nov 12 20:50:00.180728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5-rootfs.mount: Deactivated successfully. Nov 12 20:50:00.253686 containerd[1977]: time="2024-11-12T20:50:00.253451538Z" level=info msg="shim disconnected" id=707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5 namespace=k8s.io Nov 12 20:50:00.253686 containerd[1977]: time="2024-11-12T20:50:00.253686118Z" level=warning msg="cleaning up after shim disconnected" id=707a4d137467495e437a7c938521aa0c3f9361f61140b94d64b0231db54457f5 namespace=k8s.io Nov 12 20:50:00.254262 containerd[1977]: time="2024-11-12T20:50:00.253700679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:50:01.409514 kubelet[3348]: E1112 20:50:01.407098 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:50:03.405914 kubelet[3348]: E1112 20:50:03.405589 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:50:03.701615 containerd[1977]: time="2024-11-12T20:50:03.701481210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:03.703572 containerd[1977]: time="2024-11-12T20:50:03.703377553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:50:03.712912 containerd[1977]: time="2024-11-12T20:50:03.712286875Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:03.725054 containerd[1977]: time="2024-11-12T20:50:03.724998087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:03.728663 containerd[1977]: time="2024-11-12T20:50:03.728577377Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 3.9235228s" Nov 12 20:50:03.728969 containerd[1977]: time="2024-11-12T20:50:03.728843872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:50:03.743836 containerd[1977]: time="2024-11-12T20:50:03.743183668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:50:03.769608 containerd[1977]: time="2024-11-12T20:50:03.769566336Z" level=info msg="CreateContainer within sandbox \"8cc72cad3afceae5ced5ed320c1a9a3e87a44ae2dff21c928ed6862cc3c7cf47\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:50:03.798708 containerd[1977]: time="2024-11-12T20:50:03.798106878Z" level=info msg="CreateContainer within sandbox \"8cc72cad3afceae5ced5ed320c1a9a3e87a44ae2dff21c928ed6862cc3c7cf47\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fa04680ea33a1c99f2b903dad404c4fd75e98365ac5845c313ded44986e93ef3\"" Nov 12 20:50:03.799686 containerd[1977]: time="2024-11-12T20:50:03.799649111Z" level=info msg="StartContainer for \"fa04680ea33a1c99f2b903dad404c4fd75e98365ac5845c313ded44986e93ef3\"" Nov 12 20:50:04.021587 systemd[1]: Started cri-containerd-fa04680ea33a1c99f2b903dad404c4fd75e98365ac5845c313ded44986e93ef3.scope - libcontainer container fa04680ea33a1c99f2b903dad404c4fd75e98365ac5845c313ded44986e93ef3. Nov 12 20:50:04.339613 containerd[1977]: time="2024-11-12T20:50:04.339184566Z" level=info msg="StartContainer for \"fa04680ea33a1c99f2b903dad404c4fd75e98365ac5845c313ded44986e93ef3\" returns successfully" Nov 12 20:50:04.794644 kubelet[3348]: I1112 20:50:04.794330 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7866565fd4-5pwnl" podStartSLOduration=2.410258679 podStartE2EDuration="7.794309539s" podCreationTimestamp="2024-11-12 20:49:57 +0000 UTC" firstStartedPulling="2024-11-12 20:49:58.349298821 +0000 UTC m=+15.245019233" lastFinishedPulling="2024-11-12 20:50:03.733349664 +0000 UTC m=+20.629070093" observedRunningTime="2024-11-12 20:50:04.746043421 +0000 UTC m=+21.641763855" watchObservedRunningTime="2024-11-12 20:50:04.794309539 +0000 UTC m=+21.690029971" Nov 12 20:50:05.408852 kubelet[3348]: E1112 20:50:05.408799 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:50:07.403864 kubelet[3348]: E1112 20:50:07.402146 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:50:09.402301 kubelet[3348]: E1112 20:50:09.402209 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
Nov 12 20:50:09.472357 containerd[1977]: time="2024-11-12T20:50:09.472081762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:09.473984 containerd[1977]: time="2024-11-12T20:50:09.473903229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:50:09.474657 containerd[1977]: time="2024-11-12T20:50:09.474624672Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:09.477658 containerd[1977]: time="2024-11-12T20:50:09.477609399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:09.483349 containerd[1977]: time="2024-11-12T20:50:09.483302998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 5.740065051s" Nov 12 20:50:09.483469 containerd[1977]: time="2024-11-12T20:50:09.483344313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:50:09.491207 containerd[1977]: time="2024-11-12T20:50:09.491165816Z" level=info msg="CreateContainer within sandbox \"79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:50:09.525521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801557281.mount: Deactivated successfully. Nov 12 20:50:09.527382 containerd[1977]: time="2024-11-12T20:50:09.527337994Z" level=info msg="CreateContainer within sandbox \"79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46\"" Nov 12 20:50:09.529237 containerd[1977]: time="2024-11-12T20:50:09.529199011Z" level=info msg="StartContainer for \"6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46\"" Nov 12 20:50:09.594636 systemd[1]: run-containerd-runc-k8s.io-6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46-runc.TEWlMF.mount: Deactivated successfully. Nov 12 20:50:09.605105 systemd[1]: Started cri-containerd-6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46.scope - libcontainer container 6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46. Nov 12 20:50:09.646657 containerd[1977]: time="2024-11-12T20:50:09.646611746Z" level=info msg="StartContainer for \"6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46\" returns successfully" Nov 12 20:50:11.031461 systemd[1]: cri-containerd-6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46.scope: Deactivated successfully.
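The recurring "cni plugin not initialized" status only clears once the install-cni init container that just ran has dropped a network config into the CNI configuration directory. A quick sketch of the check an operator might run while waiting, assuming the conventional /etc/cni/net.d and /opt/cni/bin locations (these are containerd/kubelet defaults, not values read from this node's config):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Conventional CNI locations; the actual paths come from the CRI config.
	for _, dir := range []string{"/etc/cni/net.d", "/opt/cni/bin"} {
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(dir, "not readable:", err)
			continue
		}
		for _, e := range entries {
			fmt.Println(filepath.Join(dir, e.Name())) // conflists / plugin binaries
		}
	}
}
```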
Nov 12 20:50:11.086956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46-rootfs.mount: Deactivated successfully. Nov 12 20:50:11.096495 containerd[1977]: time="2024-11-12T20:50:11.096427212Z" level=info msg="shim disconnected" id=6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46 namespace=k8s.io Nov 12 20:50:11.097512 containerd[1977]: time="2024-11-12T20:50:11.097081804Z" level=warning msg="cleaning up after shim disconnected" id=6f6c13506699f8e06b99501880d562440024742f58d4e104039ad630911c8c46 namespace=k8s.io Nov 12 20:50:11.097512 containerd[1977]: time="2024-11-12T20:50:11.097111733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:50:11.166361 kubelet[3348]: I1112 20:50:11.165495 3348 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 20:50:11.221802 systemd[1]: Created slice kubepods-besteffort-pod5024c148_6455_4352_9054_dbb7e7eda5a0.slice - libcontainer container kubepods-besteffort-pod5024c148_6455_4352_9054_dbb7e7eda5a0.slice. Nov 12 20:50:11.245545 kubelet[3348]: I1112 20:50:11.245513 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5024c148-6455-4352-9054-dbb7e7eda5a0-calico-apiserver-certs\") pod \"calico-apiserver-76bdbb8dbb-zplh5\" (UID: \"5024c148-6455-4352-9054-dbb7e7eda5a0\") " pod="calico-apiserver/calico-apiserver-76bdbb8dbb-zplh5" Nov 12 20:50:11.245921 kubelet[3348]: I1112 20:50:11.245813 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sgdg\" (UniqueName: \"kubernetes.io/projected/0e54a56b-0402-45a2-887b-cb2004ed0683-kube-api-access-8sgdg\") pod \"calico-kube-controllers-76ddd8f86f-mvf9r\" (UID: \"0e54a56b-0402-45a2-887b-cb2004ed0683\") " pod="calico-system/calico-kube-controllers-76ddd8f86f-mvf9r" Nov 12 20:50:11.246262 kubelet[3348]: I1112 20:50:11.246159 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndq8g\" (UniqueName: \"kubernetes.io/projected/68c134a7-a52b-440c-8529-b32527bd916b-kube-api-access-ndq8g\") pod \"coredns-6f6b679f8f-wl4cj\" (UID: \"68c134a7-a52b-440c-8529-b32527bd916b\") " pod="kube-system/coredns-6f6b679f8f-wl4cj" Nov 12 20:50:11.246477 kubelet[3348]: I1112 20:50:11.246379 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e54a56b-0402-45a2-887b-cb2004ed0683-tigera-ca-bundle\") pod \"calico-kube-controllers-76ddd8f86f-mvf9r\" (UID: \"0e54a56b-0402-45a2-887b-cb2004ed0683\") " pod="calico-system/calico-kube-controllers-76ddd8f86f-mvf9r" Nov 12 20:50:11.246662 kubelet[3348]: I1112 20:50:11.246564 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/668ad4dc-2321-4d8c-901f-2cce25a4f11a-config-volume\") pod \"coredns-6f6b679f8f-qjwkb\" (UID: \"668ad4dc-2321-4d8c-901f-2cce25a4f11a\") " pod="kube-system/coredns-6f6b679f8f-qjwkb" Nov 12 20:50:11.246759 kubelet[3348]: I1112 20:50:11.246746 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmjj\" (UniqueName: \"kubernetes.io/projected/668ad4dc-2321-4d8c-901f-2cce25a4f11a-kube-api-access-wmmjj\") pod \"coredns-6f6b679f8f-qjwkb\" (UID: 
\"668ad4dc-2321-4d8c-901f-2cce25a4f11a\") " pod="kube-system/coredns-6f6b679f8f-qjwkb" Nov 12 20:50:11.246948 kubelet[3348]: I1112 20:50:11.246931 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492hn\" (UniqueName: \"kubernetes.io/projected/fecb9989-c942-4a9c-aaee-09d0b8038260-kube-api-access-492hn\") pod \"calico-apiserver-76bdbb8dbb-dsnm6\" (UID: \"fecb9989-c942-4a9c-aaee-09d0b8038260\") " pod="calico-apiserver/calico-apiserver-76bdbb8dbb-dsnm6" Nov 12 20:50:11.247174 kubelet[3348]: I1112 20:50:11.247147 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68c134a7-a52b-440c-8529-b32527bd916b-config-volume\") pod \"coredns-6f6b679f8f-wl4cj\" (UID: \"68c134a7-a52b-440c-8529-b32527bd916b\") " pod="kube-system/coredns-6f6b679f8f-wl4cj" Nov 12 20:50:11.247847 kubelet[3348]: I1112 20:50:11.247757 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm9q\" (UniqueName: \"kubernetes.io/projected/5024c148-6455-4352-9054-dbb7e7eda5a0-kube-api-access-dlm9q\") pod \"calico-apiserver-76bdbb8dbb-zplh5\" (UID: \"5024c148-6455-4352-9054-dbb7e7eda5a0\") " pod="calico-apiserver/calico-apiserver-76bdbb8dbb-zplh5" Nov 12 20:50:11.247847 kubelet[3348]: I1112 20:50:11.247797 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fecb9989-c942-4a9c-aaee-09d0b8038260-calico-apiserver-certs\") pod \"calico-apiserver-76bdbb8dbb-dsnm6\" (UID: \"fecb9989-c942-4a9c-aaee-09d0b8038260\") " pod="calico-apiserver/calico-apiserver-76bdbb8dbb-dsnm6" Nov 12 20:50:11.250568 systemd[1]: Created slice kubepods-besteffort-pod0e54a56b_0402_45a2_887b_cb2004ed0683.slice - libcontainer container kubepods-besteffort-pod0e54a56b_0402_45a2_887b_cb2004ed0683.slice. Nov 12 20:50:11.267234 systemd[1]: Created slice kubepods-burstable-pod668ad4dc_2321_4d8c_901f_2cce25a4f11a.slice - libcontainer container kubepods-burstable-pod668ad4dc_2321_4d8c_901f_2cce25a4f11a.slice. Nov 12 20:50:11.279742 systemd[1]: Created slice kubepods-besteffort-podfecb9989_c942_4a9c_aaee_09d0b8038260.slice - libcontainer container kubepods-besteffort-podfecb9989_c942_4a9c_aaee_09d0b8038260.slice. Nov 12 20:50:11.290043 systemd[1]: Created slice kubepods-burstable-pod68c134a7_a52b_440c_8529_b32527bd916b.slice - libcontainer container kubepods-burstable-pod68c134a7_a52b_440c_8529_b32527bd916b.slice. Nov 12 20:50:11.415267 systemd[1]: Created slice kubepods-besteffort-podced66ac8_f918_4e87_97a3_aafcae5e3866.slice - libcontainer container kubepods-besteffort-podced66ac8_f918_4e87_97a3_aafcae5e3866.slice. 
Nov 12 20:50:11.417824 containerd[1977]: time="2024-11-12T20:50:11.417784540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgdrq,Uid:ced66ac8-f918-4e87-97a3-aafcae5e3866,Namespace:calico-system,Attempt:0,}" Nov 12 20:50:11.538428 containerd[1977]: time="2024-11-12T20:50:11.538373741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-zplh5,Uid:5024c148-6455-4352-9054-dbb7e7eda5a0,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:50:11.571535 containerd[1977]: time="2024-11-12T20:50:11.570007154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddd8f86f-mvf9r,Uid:0e54a56b-0402-45a2-887b-cb2004ed0683,Namespace:calico-system,Attempt:0,}" Nov 12 20:50:11.592387 containerd[1977]: time="2024-11-12T20:50:11.592329675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-dsnm6,Uid:fecb9989-c942-4a9c-aaee-09d0b8038260,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:50:11.593170 containerd[1977]: time="2024-11-12T20:50:11.593120643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qjwkb,Uid:668ad4dc-2321-4d8c-901f-2cce25a4f11a,Namespace:kube-system,Attempt:0,}" Nov 12 20:50:11.615487 containerd[1977]: time="2024-11-12T20:50:11.614798346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wl4cj,Uid:68c134a7-a52b-440c-8529-b32527bd916b,Namespace:kube-system,Attempt:0,}" Nov 12 20:50:11.825135 containerd[1977]: time="2024-11-12T20:50:11.824639113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:50:11.937644 containerd[1977]: time="2024-11-12T20:50:11.937580159Z" level=error msg="Failed to destroy network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.940096 containerd[1977]: time="2024-11-12T20:50:11.938633720Z" level=error msg="Failed to destroy network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.944320 containerd[1977]: time="2024-11-12T20:50:11.944262782Z" level=error msg="encountered an error cleaning up failed sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.944571 containerd[1977]: time="2024-11-12T20:50:11.944542185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-zplh5,Uid:5024c148-6455-4352-9054-dbb7e7eda5a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.952762 containerd[1977]: time="2024-11-12T20:50:11.952076301Z" level=error msg="encountered an error cleaning up failed sandbox 
\"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.954842 containerd[1977]: time="2024-11-12T20:50:11.954122417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgdrq,Uid:ced66ac8-f918-4e87-97a3-aafcae5e3866,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.956807 kubelet[3348]: E1112 20:50:11.956525 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.956807 kubelet[3348]: E1112 20:50:11.956613 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgdrq" Nov 12 20:50:11.956807 kubelet[3348]: E1112 20:50:11.956643 3348 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgdrq" Nov 12 20:50:11.957175 kubelet[3348]: E1112 20:50:11.956694 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgdrq_calico-system(ced66ac8-f918-4e87-97a3-aafcae5e3866)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgdrq_calico-system(ced66ac8-f918-4e87-97a3-aafcae5e3866)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:50:11.957175 kubelet[3348]: E1112 20:50:11.957019 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:11.957175 kubelet[3348]: E1112 
20:50:11.957064 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-zplh5" Nov 12 20:50:11.957355 kubelet[3348]: E1112 20:50:11.957088 3348 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-zplh5" Nov 12 20:50:11.957355 kubelet[3348]: E1112 20:50:11.957128 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bdbb8dbb-zplh5_calico-apiserver(5024c148-6455-4352-9054-dbb7e7eda5a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bdbb8dbb-zplh5_calico-apiserver(5024c148-6455-4352-9054-dbb7e7eda5a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-zplh5" podUID="5024c148-6455-4352-9054-dbb7e7eda5a0" Nov 12 20:50:12.061065 containerd[1977]: time="2024-11-12T20:50:12.061014752Z" level=error msg="Failed to destroy network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.061571 containerd[1977]: time="2024-11-12T20:50:12.061528430Z" level=error msg="encountered an error cleaning up failed sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.061703 containerd[1977]: time="2024-11-12T20:50:12.061601301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddd8f86f-mvf9r,Uid:0e54a56b-0402-45a2-887b-cb2004ed0683,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.062068 kubelet[3348]: E1112 20:50:12.062032 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.062179 kubelet[3348]: E1112 20:50:12.062104 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ddd8f86f-mvf9r" Nov 12 20:50:12.062179 kubelet[3348]: E1112 20:50:12.062132 3348 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ddd8f86f-mvf9r" Nov 12 20:50:12.062332 kubelet[3348]: E1112 20:50:12.062195 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ddd8f86f-mvf9r_calico-system(0e54a56b-0402-45a2-887b-cb2004ed0683)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ddd8f86f-mvf9r_calico-system(0e54a56b-0402-45a2-887b-cb2004ed0683)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ddd8f86f-mvf9r" podUID="0e54a56b-0402-45a2-887b-cb2004ed0683" Nov 12 20:50:12.138684 containerd[1977]: time="2024-11-12T20:50:12.138458974Z" level=error msg="Failed to destroy network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.146387 containerd[1977]: time="2024-11-12T20:50:12.145318523Z" level=error msg="encountered an error cleaning up failed sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.146387 containerd[1977]: time="2024-11-12T20:50:12.145997093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-dsnm6,Uid:fecb9989-c942-4a9c-aaee-09d0b8038260,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.148349 kubelet[3348]: E1112 20:50:12.146237 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.148349 kubelet[3348]: E1112 20:50:12.146301 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-dsnm6" Nov 12 20:50:12.148349 kubelet[3348]: E1112 20:50:12.146328 3348 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-dsnm6" Nov 12 20:50:12.148503 kubelet[3348]: E1112 20:50:12.146387 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bdbb8dbb-dsnm6_calico-apiserver(fecb9989-c942-4a9c-aaee-09d0b8038260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bdbb8dbb-dsnm6_calico-apiserver(fecb9989-c942-4a9c-aaee-09d0b8038260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-dsnm6" podUID="fecb9989-c942-4a9c-aaee-09d0b8038260" Nov 12 20:50:12.149146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21-shm.mount: Deactivated successfully. 
Nov 12 20:50:12.164572 containerd[1977]: time="2024-11-12T20:50:12.163461101Z" level=error msg="Failed to destroy network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.164572 containerd[1977]: time="2024-11-12T20:50:12.164073427Z" level=error msg="encountered an error cleaning up failed sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.164572 containerd[1977]: time="2024-11-12T20:50:12.164135120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qjwkb,Uid:668ad4dc-2321-4d8c-901f-2cce25a4f11a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.165060 kubelet[3348]: E1112 20:50:12.165023 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.166332 kubelet[3348]: E1112 20:50:12.165185 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qjwkb" Nov 12 20:50:12.166332 kubelet[3348]: E1112 20:50:12.165237 3348 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qjwkb" Nov 12 20:50:12.166332 kubelet[3348]: E1112 20:50:12.165293 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qjwkb_kube-system(668ad4dc-2321-4d8c-901f-2cce25a4f11a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qjwkb_kube-system(668ad4dc-2321-4d8c-901f-2cce25a4f11a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qjwkb" 
podUID="668ad4dc-2321-4d8c-901f-2cce25a4f11a" Nov 12 20:50:12.170690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46-shm.mount: Deactivated successfully. Nov 12 20:50:12.171114 containerd[1977]: time="2024-11-12T20:50:12.171074653Z" level=error msg="Failed to destroy network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.176587 containerd[1977]: time="2024-11-12T20:50:12.171717876Z" level=error msg="encountered an error cleaning up failed sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.177317 containerd[1977]: time="2024-11-12T20:50:12.177249056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wl4cj,Uid:68c134a7-a52b-440c-8529-b32527bd916b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.179404 kubelet[3348]: E1112 20:50:12.178277 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.179404 kubelet[3348]: E1112 20:50:12.179054 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wl4cj" Nov 12 20:50:12.179404 kubelet[3348]: E1112 20:50:12.179111 3348 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wl4cj" Nov 12 20:50:12.180345 kubelet[3348]: E1112 20:50:12.180268 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wl4cj_kube-system(68c134a7-a52b-440c-8529-b32527bd916b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wl4cj_kube-system(68c134a7-a52b-440c-8529-b32527bd916b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wl4cj" podUID="68c134a7-a52b-440c-8529-b32527bd916b" Nov 12 20:50:12.182782 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f-shm.mount: Deactivated successfully. Nov 12 20:50:12.807153 kubelet[3348]: I1112 20:50:12.807034 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:12.810974 kubelet[3348]: I1112 20:50:12.809541 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:12.816257 containerd[1977]: time="2024-11-12T20:50:12.816192019Z" level=info msg="StopPodSandbox for \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\"" Nov 12 20:50:12.819786 containerd[1977]: time="2024-11-12T20:50:12.819705808Z" level=info msg="Ensure that sandbox 3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3 in task-service has been cleanup successfully" Nov 12 20:50:12.826522 containerd[1977]: time="2024-11-12T20:50:12.826180849Z" level=info msg="StopPodSandbox for \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\"" Nov 12 20:50:12.826522 containerd[1977]: time="2024-11-12T20:50:12.826473168Z" level=info msg="Ensure that sandbox d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21 in task-service has been cleanup successfully" Nov 12 20:50:12.829235 kubelet[3348]: I1112 20:50:12.827264 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:12.831683 containerd[1977]: time="2024-11-12T20:50:12.831458484Z" level=info msg="StopPodSandbox for \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\"" Nov 12 20:50:12.832708 containerd[1977]: time="2024-11-12T20:50:12.832670600Z" level=info msg="Ensure that sandbox 0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb in task-service has been cleanup successfully" Nov 12 20:50:12.838442 kubelet[3348]: I1112 20:50:12.837030 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:12.839819 containerd[1977]: time="2024-11-12T20:50:12.839781388Z" level=info msg="StopPodSandbox for \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\"" Nov 12 20:50:12.841148 containerd[1977]: time="2024-11-12T20:50:12.840931636Z" level=info msg="Ensure that sandbox 751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0 in task-service has been cleanup successfully" Nov 12 20:50:12.846807 kubelet[3348]: I1112 20:50:12.846641 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:12.849792 containerd[1977]: time="2024-11-12T20:50:12.849738295Z" level=info msg="StopPodSandbox for \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\"" Nov 12 20:50:12.850166 containerd[1977]: time="2024-11-12T20:50:12.850138557Z" level=info msg="Ensure that sandbox 
ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f in task-service has been cleanup successfully" Nov 12 20:50:12.864414 kubelet[3348]: I1112 20:50:12.864048 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:12.870252 containerd[1977]: time="2024-11-12T20:50:12.870209367Z" level=info msg="StopPodSandbox for \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\"" Nov 12 20:50:12.870997 containerd[1977]: time="2024-11-12T20:50:12.870961053Z" level=info msg="Ensure that sandbox d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46 in task-service has been cleanup successfully" Nov 12 20:50:12.980978 containerd[1977]: time="2024-11-12T20:50:12.980781254Z" level=error msg="StopPodSandbox for \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\" failed" error="failed to destroy network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.981469 kubelet[3348]: E1112 20:50:12.981104 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:12.981469 kubelet[3348]: E1112 20:50:12.981177 3348 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb"} Nov 12 20:50:12.981469 kubelet[3348]: E1112 20:50:12.981303 3348 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5024c148-6455-4352-9054-dbb7e7eda5a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:50:12.981469 kubelet[3348]: E1112 20:50:12.981336 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5024c148-6455-4352-9054-dbb7e7eda5a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-zplh5" podUID="5024c148-6455-4352-9054-dbb7e7eda5a0" Nov 12 20:50:12.989592 containerd[1977]: time="2024-11-12T20:50:12.989512626Z" level=error msg="StopPodSandbox for \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\" failed" error="failed to destroy network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:12.990297 kubelet[3348]: E1112 20:50:12.990076 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:12.990297 kubelet[3348]: E1112 20:50:12.990138 3348 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46"} Nov 12 20:50:12.990297 kubelet[3348]: E1112 20:50:12.990178 3348 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"668ad4dc-2321-4d8c-901f-2cce25a4f11a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:50:12.990297 kubelet[3348]: E1112 20:50:12.990222 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"668ad4dc-2321-4d8c-901f-2cce25a4f11a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qjwkb" podUID="668ad4dc-2321-4d8c-901f-2cce25a4f11a" Nov 12 20:50:13.017634 containerd[1977]: time="2024-11-12T20:50:13.017411503Z" level=error msg="StopPodSandbox for \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\" failed" error="failed to destroy network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:13.018569 kubelet[3348]: E1112 20:50:13.018355 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:13.018569 kubelet[3348]: E1112 20:50:13.018427 3348 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3"} Nov 12 20:50:13.018569 kubelet[3348]: E1112 20:50:13.018477 3348 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e54a56b-0402-45a2-887b-cb2004ed0683\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:50:13.018569 kubelet[3348]: E1112 20:50:13.018524 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e54a56b-0402-45a2-887b-cb2004ed0683\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ddd8f86f-mvf9r" podUID="0e54a56b-0402-45a2-887b-cb2004ed0683" Nov 12 20:50:13.019447 containerd[1977]: time="2024-11-12T20:50:13.019309841Z" level=error msg="StopPodSandbox for \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\" failed" error="failed to destroy network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:13.020144 kubelet[3348]: E1112 20:50:13.019858 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:13.020144 kubelet[3348]: E1112 20:50:13.020012 3348 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0"} Nov 12 20:50:13.020144 kubelet[3348]: E1112 20:50:13.020069 3348 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ced66ac8-f918-4e87-97a3-aafcae5e3866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:50:13.020144 kubelet[3348]: E1112 20:50:13.020100 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ced66ac8-f918-4e87-97a3-aafcae5e3866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgdrq" podUID="ced66ac8-f918-4e87-97a3-aafcae5e3866" Nov 12 20:50:13.038155 containerd[1977]: time="2024-11-12T20:50:13.038100287Z" level=error 
msg="StopPodSandbox for \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\" failed" error="failed to destroy network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:13.038423 kubelet[3348]: E1112 20:50:13.038346 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:13.038532 kubelet[3348]: E1112 20:50:13.038421 3348 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f"} Nov 12 20:50:13.038532 kubelet[3348]: E1112 20:50:13.038464 3348 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68c134a7-a52b-440c-8529-b32527bd916b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:50:13.038532 kubelet[3348]: E1112 20:50:13.038494 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68c134a7-a52b-440c-8529-b32527bd916b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wl4cj" podUID="68c134a7-a52b-440c-8529-b32527bd916b" Nov 12 20:50:13.040049 containerd[1977]: time="2024-11-12T20:50:13.040001230Z" level=error msg="StopPodSandbox for \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\" failed" error="failed to destroy network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:50:13.040392 kubelet[3348]: E1112 20:50:13.040351 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:13.040611 kubelet[3348]: E1112 20:50:13.040403 3348 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21"} Nov 12 20:50:13.040992 kubelet[3348]: E1112 20:50:13.040953 3348 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fecb9989-c942-4a9c-aaee-09d0b8038260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:50:13.041121 kubelet[3348]: E1112 20:50:13.040996 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fecb9989-c942-4a9c-aaee-09d0b8038260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-dsnm6" podUID="fecb9989-c942-4a9c-aaee-09d0b8038260" Nov 12 20:50:20.302078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424313222.mount: Deactivated successfully. Nov 12 20:50:20.440559 containerd[1977]: time="2024-11-12T20:50:20.438866657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:50:20.460242 containerd[1977]: time="2024-11-12T20:50:20.460187764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 8.631530351s" Nov 12 20:50:20.460242 containerd[1977]: time="2024-11-12T20:50:20.460234110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:50:20.468992 containerd[1977]: time="2024-11-12T20:50:20.468942634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:20.501848 containerd[1977]: time="2024-11-12T20:50:20.501789165Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:20.502753 containerd[1977]: time="2024-11-12T20:50:20.502716280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:20.685336 containerd[1977]: time="2024-11-12T20:50:20.685158148Z" level=info msg="CreateContainer within sandbox \"79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:50:20.764116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918092371.mount: Deactivated successfully. 
Nov 12 20:50:20.792065 containerd[1977]: time="2024-11-12T20:50:20.792017897Z" level=info msg="CreateContainer within sandbox \"79060d54500fd5c26a970efe360ca9256e159c966fe947f18e646aaf9977af26\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"547b8881e4fc0ab50fa6cdbbee2e770c882790889ff8bff6f63962da0d65efea\"" Nov 12 20:50:20.796237 containerd[1977]: time="2024-11-12T20:50:20.796127636Z" level=info msg="StartContainer for \"547b8881e4fc0ab50fa6cdbbee2e770c882790889ff8bff6f63962da0d65efea\"" Nov 12 20:50:21.050149 systemd[1]: Started cri-containerd-547b8881e4fc0ab50fa6cdbbee2e770c882790889ff8bff6f63962da0d65efea.scope - libcontainer container 547b8881e4fc0ab50fa6cdbbee2e770c882790889ff8bff6f63962da0d65efea. Nov 12 20:50:21.169582 containerd[1977]: time="2024-11-12T20:50:21.169267404Z" level=info msg="StartContainer for \"547b8881e4fc0ab50fa6cdbbee2e770c882790889ff8bff6f63962da0d65efea\" returns successfully" Nov 12 20:50:21.385533 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:50:21.386226 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 12 20:50:23.087310 systemd[1]: run-containerd-runc-k8s.io-547b8881e4fc0ab50fa6cdbbee2e770c882790889ff8bff6f63962da0d65efea-runc.H6ECnb.mount: Deactivated successfully. Nov 12 20:50:23.416749 containerd[1977]: time="2024-11-12T20:50:23.415417534Z" level=info msg="StopPodSandbox for \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\"" Nov 12 20:50:23.613758 kubelet[3348]: I1112 20:50:23.602924 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-27qdb" podStartSLOduration=4.345913496 podStartE2EDuration="26.576807633s" podCreationTimestamp="2024-11-12 20:49:57 +0000 UTC" firstStartedPulling="2024-11-12 20:49:58.272539802 +0000 UTC m=+15.168260216" lastFinishedPulling="2024-11-12 20:50:20.503433941 +0000 UTC m=+37.399154353" observedRunningTime="2024-11-12 20:50:22.170398163 +0000 UTC m=+39.066118589" watchObservedRunningTime="2024-11-12 20:50:23.576807633 +0000 UTC m=+40.472528066" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.575 [INFO][4571] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.577 [INFO][4571] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" iface="eth0" netns="/var/run/netns/cni-e2710e4c-d3bc-12ff-596b-50336380c377" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.577 [INFO][4571] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" iface="eth0" netns="/var/run/netns/cni-e2710e4c-d3bc-12ff-596b-50336380c377" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.579 [INFO][4571] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" iface="eth0" netns="/var/run/netns/cni-e2710e4c-d3bc-12ff-596b-50336380c377" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.579 [INFO][4571] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.579 [INFO][4571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.814 [INFO][4577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.817 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.818 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.835 [WARNING][4577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.835 [INFO][4577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.841 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:23.856240 containerd[1977]: 2024-11-12 20:50:23.850 [INFO][4571] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:23.865080 systemd[1]: run-netns-cni\x2de2710e4c\x2dd3bc\x2d12ff\x2d596b\x2d50336380c377.mount: Deactivated successfully. Nov 12 20:50:23.897074 containerd[1977]: time="2024-11-12T20:50:23.896234355Z" level=info msg="TearDown network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\" successfully" Nov 12 20:50:23.897074 containerd[1977]: time="2024-11-12T20:50:23.896967587Z" level=info msg="StopPodSandbox for \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\" returns successfully" Nov 12 20:50:23.909579 containerd[1977]: time="2024-11-12T20:50:23.909046013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddd8f86f-mvf9r,Uid:0e54a56b-0402-45a2-887b-cb2004ed0683,Namespace:calico-system,Attempt:1,}" Nov 12 20:50:24.053995 kernel: bpftool[4632]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:50:24.226800 (udev-worker)[4403]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 20:50:24.233263 systemd-networkd[1811]: calif45bb6c2c37: Link UP Nov 12 20:50:24.234496 systemd-networkd[1811]: calif45bb6c2c37: Gained carrier Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.067 [INFO][4614] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0 calico-kube-controllers-76ddd8f86f- calico-system 0e54a56b-0402-45a2-887b-cb2004ed0683 743 0 2024-11-12 20:49:58 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76ddd8f86f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-74 calico-kube-controllers-76ddd8f86f-mvf9r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif45bb6c2c37 [] []}} ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.067 [INFO][4614] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.134 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" HandleID="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.152 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" HandleID="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040c820), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-74", "pod":"calico-kube-controllers-76ddd8f86f-mvf9r", "timestamp":"2024-11-12 20:50:24.134203013 +0000 UTC"}, Hostname:"ip-172-31-17-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.153 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.153 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.153 [INFO][4633] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-74' Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.157 [INFO][4633] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.173 [INFO][4633] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.183 [INFO][4633] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.186 [INFO][4633] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.190 [INFO][4633] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.190 [INFO][4633] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.192 [INFO][4633] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.198 [INFO][4633] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.208 [INFO][4633] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.129/26] block=192.168.41.128/26 handle="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.209 [INFO][4633] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.129/26] handle="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" host="ip-172-31-17-74" Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.209 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:50:24.268654 containerd[1977]: 2024-11-12 20:50:24.209 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.129/26] IPv6=[] ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" HandleID="k8s-pod-network.e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:24.271075 containerd[1977]: 2024-11-12 20:50:24.212 [INFO][4614] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0", GenerateName:"calico-kube-controllers-76ddd8f86f-", Namespace:"calico-system", SelfLink:"", UID:"0e54a56b-0402-45a2-887b-cb2004ed0683", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddd8f86f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"", Pod:"calico-kube-controllers-76ddd8f86f-mvf9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif45bb6c2c37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:24.271075 containerd[1977]: 2024-11-12 20:50:24.212 [INFO][4614] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.129/32] ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:24.271075 containerd[1977]: 2024-11-12 20:50:24.212 [INFO][4614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif45bb6c2c37 ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:24.271075 containerd[1977]: 2024-11-12 20:50:24.235 [INFO][4614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:24.271075 containerd[1977]: 2024-11-12 20:50:24.238 [INFO][4614] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0", GenerateName:"calico-kube-controllers-76ddd8f86f-", Namespace:"calico-system", SelfLink:"", UID:"0e54a56b-0402-45a2-887b-cb2004ed0683", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddd8f86f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b", Pod:"calico-kube-controllers-76ddd8f86f-mvf9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif45bb6c2c37", MAC:"be:8c:1d:02:d5:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:24.271075 containerd[1977]: 2024-11-12 20:50:24.260 [INFO][4614] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b" Namespace="calico-system" Pod="calico-kube-controllers-76ddd8f86f-mvf9r" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:24.366172 containerd[1977]: time="2024-11-12T20:50:24.365604399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:24.366172 containerd[1977]: time="2024-11-12T20:50:24.365751933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:24.366172 containerd[1977]: time="2024-11-12T20:50:24.365784048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:24.366172 containerd[1977]: time="2024-11-12T20:50:24.366061659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:24.404121 systemd[1]: Started cri-containerd-e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b.scope - libcontainer container e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b. 
Nov 12 20:50:24.559029 containerd[1977]: time="2024-11-12T20:50:24.558984888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddd8f86f-mvf9r,Uid:0e54a56b-0402-45a2-887b-cb2004ed0683,Namespace:calico-system,Attempt:1,} returns sandbox id \"e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b\"" Nov 12 20:50:24.576233 containerd[1977]: time="2024-11-12T20:50:24.576130238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:50:24.649133 systemd-networkd[1811]: vxlan.calico: Link UP Nov 12 20:50:24.649142 systemd-networkd[1811]: vxlan.calico: Gained carrier Nov 12 20:50:24.697129 (udev-worker)[4404]: Network interface NamePolicy= disabled on kernel command line. Nov 12 20:50:25.403441 containerd[1977]: time="2024-11-12T20:50:25.403328926Z" level=info msg="StopPodSandbox for \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\"" Nov 12 20:50:25.416770 systemd-networkd[1811]: calif45bb6c2c37: Gained IPv6LL Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.534 [INFO][4775] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.534 [INFO][4775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" iface="eth0" netns="/var/run/netns/cni-ac8a696e-8745-b819-ae56-0217d35fc74d" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.534 [INFO][4775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" iface="eth0" netns="/var/run/netns/cni-ac8a696e-8745-b819-ae56-0217d35fc74d" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.535 [INFO][4775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" iface="eth0" netns="/var/run/netns/cni-ac8a696e-8745-b819-ae56-0217d35fc74d" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.535 [INFO][4775] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.535 [INFO][4775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.574 [INFO][4781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.575 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.575 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.581 [WARNING][4781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.581 [INFO][4781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.584 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:25.588517 containerd[1977]: 2024-11-12 20:50:25.586 [INFO][4775] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:25.592128 containerd[1977]: time="2024-11-12T20:50:25.590509256Z" level=info msg="TearDown network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\" successfully" Nov 12 20:50:25.592128 containerd[1977]: time="2024-11-12T20:50:25.590632657Z" level=info msg="StopPodSandbox for \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\" returns successfully" Nov 12 20:50:25.594307 containerd[1977]: time="2024-11-12T20:50:25.594056862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-zplh5,Uid:5024c148-6455-4352-9054-dbb7e7eda5a0,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:50:25.594712 systemd[1]: run-netns-cni\x2dac8a696e\x2d8745\x2db819\x2dae56\x2d0217d35fc74d.mount: Deactivated successfully. Nov 12 20:50:25.907930 (udev-worker)[4728]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 20:50:25.910353 systemd-networkd[1811]: cali7234e6ca0ed: Link UP Nov 12 20:50:25.913663 systemd-networkd[1811]: cali7234e6ca0ed: Gained carrier Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.743 [INFO][4787] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0 calico-apiserver-76bdbb8dbb- calico-apiserver 5024c148-6455-4352-9054-dbb7e7eda5a0 753 0 2024-11-12 20:49:57 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bdbb8dbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-74 calico-apiserver-76bdbb8dbb-zplh5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7234e6ca0ed [] []}} ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.744 [INFO][4787] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.820 [INFO][4798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" HandleID="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.849 [INFO][4798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" HandleID="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-74", "pod":"calico-apiserver-76bdbb8dbb-zplh5", "timestamp":"2024-11-12 20:50:25.820094385 +0000 UTC"}, Hostname:"ip-172-31-17-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.849 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.850 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.850 [INFO][4798] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-74' Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.854 [INFO][4798] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.866 [INFO][4798] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.874 [INFO][4798] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.877 [INFO][4798] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.879 [INFO][4798] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.879 [INFO][4798] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.881 [INFO][4798] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521 Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.889 [INFO][4798] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.896 [INFO][4798] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.130/26] block=192.168.41.128/26 handle="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.896 [INFO][4798] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.130/26] handle="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" host="ip-172-31-17-74" Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.896 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:50:25.938906 containerd[1977]: 2024-11-12 20:50:25.896 [INFO][4798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.130/26] IPv6=[] ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" HandleID="k8s-pod-network.fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.940550 containerd[1977]: 2024-11-12 20:50:25.899 [INFO][4787] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5024c148-6455-4352-9054-dbb7e7eda5a0", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"", Pod:"calico-apiserver-76bdbb8dbb-zplh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7234e6ca0ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:25.940550 containerd[1977]: 2024-11-12 20:50:25.899 [INFO][4787] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.130/32] ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.940550 containerd[1977]: 2024-11-12 20:50:25.899 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7234e6ca0ed ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.940550 containerd[1977]: 2024-11-12 20:50:25.907 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:25.940550 containerd[1977]: 2024-11-12 20:50:25.907 [INFO][4787] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5024c148-6455-4352-9054-dbb7e7eda5a0", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521", Pod:"calico-apiserver-76bdbb8dbb-zplh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7234e6ca0ed", MAC:"aa:6d:4e:0a:4e:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:25.940550 containerd[1977]: 2024-11-12 20:50:25.933 [INFO][4787] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-zplh5" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:26.047358 containerd[1977]: time="2024-11-12T20:50:26.038666793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:26.047358 containerd[1977]: time="2024-11-12T20:50:26.038757862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:26.047358 containerd[1977]: time="2024-11-12T20:50:26.038780633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:26.047358 containerd[1977]: time="2024-11-12T20:50:26.038905381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:26.120314 systemd[1]: Started cri-containerd-fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521.scope - libcontainer container fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521. 
Nov 12 20:50:26.209548 containerd[1977]: time="2024-11-12T20:50:26.209417276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-zplh5,Uid:5024c148-6455-4352-9054-dbb7e7eda5a0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521\"" Nov 12 20:50:26.402850 containerd[1977]: time="2024-11-12T20:50:26.402796376Z" level=info msg="StopPodSandbox for \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\"" Nov 12 20:50:26.407527 containerd[1977]: time="2024-11-12T20:50:26.407095116Z" level=info msg="StopPodSandbox for \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\"" Nov 12 20:50:26.500970 systemd-networkd[1811]: vxlan.calico: Gained IPv6LL Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.672 [INFO][4888] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.672 [INFO][4888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" iface="eth0" netns="/var/run/netns/cni-1e402551-8654-d53d-c3cd-dad5aab3acf4" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.672 [INFO][4888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" iface="eth0" netns="/var/run/netns/cni-1e402551-8654-d53d-c3cd-dad5aab3acf4" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.673 [INFO][4888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" iface="eth0" netns="/var/run/netns/cni-1e402551-8654-d53d-c3cd-dad5aab3acf4" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.673 [INFO][4888] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.673 [INFO][4888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.754 [INFO][4897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.755 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.755 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.790 [WARNING][4897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.795 [INFO][4897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.814 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:26.855541 containerd[1977]: 2024-11-12 20:50:26.841 [INFO][4888] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:26.870259 containerd[1977]: time="2024-11-12T20:50:26.870184712Z" level=info msg="TearDown network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\" successfully" Nov 12 20:50:26.870551 containerd[1977]: time="2024-11-12T20:50:26.870346771Z" level=info msg="StopPodSandbox for \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\" returns successfully" Nov 12 20:50:26.883656 containerd[1977]: time="2024-11-12T20:50:26.883379299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-dsnm6,Uid:fecb9989-c942-4a9c-aaee-09d0b8038260,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:50:26.884856 systemd[1]: run-netns-cni\x2d1e402551\x2d8654\x2dd53d\x2dc3cd\x2ddad5aab3acf4.mount: Deactivated successfully. Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.674 [INFO][4877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.675 [INFO][4877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" iface="eth0" netns="/var/run/netns/cni-3d935f90-0a7e-aa23-2d42-7ecc191e1685" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.675 [INFO][4877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" iface="eth0" netns="/var/run/netns/cni-3d935f90-0a7e-aa23-2d42-7ecc191e1685" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.675 [INFO][4877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" iface="eth0" netns="/var/run/netns/cni-3d935f90-0a7e-aa23-2d42-7ecc191e1685" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.675 [INFO][4877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.675 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.848 [INFO][4898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.851 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.851 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.898 [WARNING][4898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.898 [INFO][4898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.900 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:26.911941 containerd[1977]: 2024-11-12 20:50:26.905 [INFO][4877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:26.914472 containerd[1977]: time="2024-11-12T20:50:26.913391990Z" level=info msg="TearDown network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\" successfully" Nov 12 20:50:26.914472 containerd[1977]: time="2024-11-12T20:50:26.913436088Z" level=info msg="StopPodSandbox for \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\" returns successfully" Nov 12 20:50:26.915651 containerd[1977]: time="2024-11-12T20:50:26.915397620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qjwkb,Uid:668ad4dc-2321-4d8c-901f-2cce25a4f11a,Namespace:kube-system,Attempt:1,}" Nov 12 20:50:26.919872 systemd[1]: run-netns-cni\x2d3d935f90\x2d0a7e\x2daa23\x2d2d42\x2d7ecc191e1685.mount: Deactivated successfully. 
Nov 12 20:50:27.239861 systemd-networkd[1811]: cali8a787e38ed2: Link UP Nov 12 20:50:27.244681 systemd-networkd[1811]: cali8a787e38ed2: Gained carrier Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.049 [INFO][4924] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0 coredns-6f6b679f8f- kube-system 668ad4dc-2321-4d8c-901f-2cce25a4f11a 763 0 2024-11-12 20:49:47 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-74 coredns-6f6b679f8f-qjwkb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a787e38ed2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.049 [INFO][4924] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.144 [INFO][4939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" HandleID="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.167 [INFO][4939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" HandleID="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051080), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-74", "pod":"coredns-6f6b679f8f-qjwkb", "timestamp":"2024-11-12 20:50:27.14406954 +0000 UTC"}, Hostname:"ip-172-31-17-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.167 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.168 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.168 [INFO][4939] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-74' Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.172 [INFO][4939] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.178 [INFO][4939] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.188 [INFO][4939] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.193 [INFO][4939] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.200 [INFO][4939] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.201 [INFO][4939] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.205 [INFO][4939] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168 Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.214 [INFO][4939] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.228 [INFO][4939] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.131/26] block=192.168.41.128/26 handle="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.228 [INFO][4939] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.131/26] handle="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" host="ip-172-31-17-74" Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.228 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
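Note: the IPAM walk above is the same each time it recurs in this section: look up the host's block affinity, load the affine block (192.168.41.128/26 here), then claim the first free address in it, all inside the host-wide lock. A toy version of just the claim step (plain Go, not Calico's ipam.go; the pre-used entries reflect addresses already visible earlier in the log):

    // Sketch: claim the first free IPv4 address in an affine /26 block.
    // Toy logic only; Calico's real allocator also records handles and attrs.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func claim(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                used[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.41.128/26")
        used := map[netip.Addr]bool{ // .128-.130 already taken per earlier entries
            netip.MustParseAddr("192.168.41.128"): true,
            netip.MustParseAddr("192.168.41.129"): true,
            netip.MustParseAddr("192.168.41.130"): true,
        }
        ip, _ := claim(block, used)
        fmt.Println(ip) // 192.168.41.131
    }

The next entry shows the real allocator landing on exactly 192.168.41.131/26 for coredns-qjwkb, and the later walks claim .132, .133 and .134 the same way.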
Nov 12 20:50:27.278660 containerd[1977]: 2024-11-12 20:50:27.228 [INFO][4939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.131/26] IPv6=[] ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" HandleID="k8s-pod-network.ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:27.279506 containerd[1977]: 2024-11-12 20:50:27.234 [INFO][4924] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"668ad4dc-2321-4d8c-901f-2cce25a4f11a", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"", Pod:"coredns-6f6b679f8f-qjwkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a787e38ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:27.279506 containerd[1977]: 2024-11-12 20:50:27.234 [INFO][4924] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.131/32] ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:27.279506 containerd[1977]: 2024-11-12 20:50:27.234 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a787e38ed2 ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:27.279506 containerd[1977]: 2024-11-12 20:50:27.238 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" 
WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:27.279506 containerd[1977]: 2024-11-12 20:50:27.239 [INFO][4924] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"668ad4dc-2321-4d8c-901f-2cce25a4f11a", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168", Pod:"coredns-6f6b679f8f-qjwkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a787e38ed2", MAC:"8a:7d:51:d7:c1:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:27.279506 containerd[1977]: 2024-11-12 20:50:27.267 [INFO][4924] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168" Namespace="kube-system" Pod="coredns-6f6b679f8f-qjwkb" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:27.329182 systemd-networkd[1811]: cali7234e6ca0ed: Gained IPv6LL Nov 12 20:50:27.403342 containerd[1977]: time="2024-11-12T20:50:27.403292986Z" level=info msg="StopPodSandbox for \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\"" Nov 12 20:50:27.410211 containerd[1977]: time="2024-11-12T20:50:27.409575087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:27.410211 containerd[1977]: time="2024-11-12T20:50:27.409673920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:27.410211 containerd[1977]: time="2024-11-12T20:50:27.409698652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:27.410211 containerd[1977]: time="2024-11-12T20:50:27.409813451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:27.413222 systemd-networkd[1811]: califbf2b2574d6: Link UP Nov 12 20:50:27.417969 systemd-networkd[1811]: califbf2b2574d6: Gained carrier Nov 12 20:50:27.431249 containerd[1977]: time="2024-11-12T20:50:27.431204973Z" level=info msg="StopPodSandbox for \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\"" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.054 [INFO][4914] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0 calico-apiserver-76bdbb8dbb- calico-apiserver fecb9989-c942-4a9c-aaee-09d0b8038260 762 0 2024-11-12 20:49:57 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bdbb8dbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-74 calico-apiserver-76bdbb8dbb-dsnm6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califbf2b2574d6 [] []}} ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.057 [INFO][4914] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.178 [INFO][4943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" HandleID="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.205 [INFO][4943] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" HandleID="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000395b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-74", "pod":"calico-apiserver-76bdbb8dbb-dsnm6", "timestamp":"2024-11-12 20:50:27.178698211 +0000 UTC"}, Hostname:"ip-172-31-17-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.205 [INFO][4943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.228 [INFO][4943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
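Note: the bursts of loading plugin "io.containerd.*" lines mark a fresh runc.v2 shim starting up, and they appear to recur once per new sandbox in this section. If you ever need to pull fields out of these logfmt-style entries, a rough stdlib-only extractor is below (a regexp approximation, not a full logfmt parser):

    // Sketch: extract key="value" pairs from containerd's logfmt-style lines.
    // Rough regexp extraction, good enough for eyeballing logs, nothing more.
    package main

    import (
        "fmt"
        "regexp"
    )

    var kv = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

    func main() {
        line := `time="2024-11-12T20:50:27.409575087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2`
        for _, m := range kv.FindAllStringSubmatch(line, -1) {
            fmt.Printf("%s -> %s\n", m[1], m[2])
        }
    }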
Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.228 [INFO][4943] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-74' Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.276 [INFO][4943] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.293 [INFO][4943] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.310 [INFO][4943] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.316 [INFO][4943] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.324 [INFO][4943] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.324 [INFO][4943] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.336 [INFO][4943] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9 Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.352 [INFO][4943] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.382 [INFO][4943] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.132/26] block=192.168.41.128/26 handle="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.383 [INFO][4943] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.132/26] handle="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" host="ip-172-31-17-74" Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.383 [INFO][4943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:50:27.498767 containerd[1977]: 2024-11-12 20:50:27.383 [INFO][4943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.132/26] IPv6=[] ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" HandleID="k8s-pod-network.a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:27.500090 containerd[1977]: 2024-11-12 20:50:27.400 [INFO][4914] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecb9989-c942-4a9c-aaee-09d0b8038260", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"", Pod:"calico-apiserver-76bdbb8dbb-dsnm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbf2b2574d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:27.500090 containerd[1977]: 2024-11-12 20:50:27.402 [INFO][4914] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.132/32] ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:27.500090 containerd[1977]: 2024-11-12 20:50:27.402 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbf2b2574d6 ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:27.500090 containerd[1977]: 2024-11-12 20:50:27.419 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:27.500090 containerd[1977]: 2024-11-12 20:50:27.420 [INFO][4914] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecb9989-c942-4a9c-aaee-09d0b8038260", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9", Pod:"calico-apiserver-76bdbb8dbb-dsnm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbf2b2574d6", MAC:"1a:92:2e:9f:c1:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:27.500090 containerd[1977]: 2024-11-12 20:50:27.459 [INFO][4914] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9" Namespace="calico-apiserver" Pod="calico-apiserver-76bdbb8dbb-dsnm6" WorkloadEndpoint="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:27.531458 systemd[1]: Started cri-containerd-ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168.scope - libcontainer container ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168. Nov 12 20:50:27.633327 containerd[1977]: time="2024-11-12T20:50:27.633231859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:27.633617 containerd[1977]: time="2024-11-12T20:50:27.633581736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:27.633804 containerd[1977]: time="2024-11-12T20:50:27.633777693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:27.634906 containerd[1977]: time="2024-11-12T20:50:27.634747800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:27.719998 containerd[1977]: time="2024-11-12T20:50:27.719453867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qjwkb,Uid:668ad4dc-2321-4d8c-901f-2cce25a4f11a,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168\"" Nov 12 20:50:27.731379 systemd[1]: Started cri-containerd-a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9.scope - libcontainer container a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9. Nov 12 20:50:27.738534 containerd[1977]: time="2024-11-12T20:50:27.737967854Z" level=info msg="CreateContainer within sandbox \"ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:50:27.790140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4108232771.mount: Deactivated successfully. Nov 12 20:50:27.825113 containerd[1977]: time="2024-11-12T20:50:27.823486984Z" level=info msg="CreateContainer within sandbox \"ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"284afe928ee644152140c0928e4443833dc43f7c9d21ff64aed38d2c7bbe8149\"" Nov 12 20:50:27.825113 containerd[1977]: time="2024-11-12T20:50:27.824854642Z" level=info msg="StartContainer for \"284afe928ee644152140c0928e4443833dc43f7c9d21ff64aed38d2c7bbe8149\"" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.650 [INFO][5018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.657 [INFO][5018] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" iface="eth0" netns="/var/run/netns/cni-b76b5b8d-0450-f347-b1af-907503c9ef5a" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.660 [INFO][5018] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" iface="eth0" netns="/var/run/netns/cni-b76b5b8d-0450-f347-b1af-907503c9ef5a" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.661 [INFO][5018] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" iface="eth0" netns="/var/run/netns/cni-b76b5b8d-0450-f347-b1af-907503c9ef5a" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.661 [INFO][5018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.661 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.844 [INFO][5066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.849 [INFO][5066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.849 [INFO][5066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.874 [WARNING][5066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.874 [INFO][5066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.883 [INFO][5066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:27.917444 containerd[1977]: 2024-11-12 20:50:27.907 [INFO][5018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:27.922732 containerd[1977]: time="2024-11-12T20:50:27.922355687Z" level=info msg="TearDown network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\" successfully" Nov 12 20:50:27.923120 containerd[1977]: time="2024-11-12T20:50:27.923094923Z" level=info msg="StopPodSandbox for \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\" returns successfully" Nov 12 20:50:27.923671 containerd[1977]: time="2024-11-12T20:50:27.923650216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bdbb8dbb-dsnm6,Uid:fecb9989-c942-4a9c-aaee-09d0b8038260,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9\"" Nov 12 20:50:27.925858 containerd[1977]: time="2024-11-12T20:50:27.925730480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wl4cj,Uid:68c134a7-a52b-440c-8529-b32527bd916b,Namespace:kube-system,Attempt:1,}" Nov 12 20:50:27.957788 systemd[1]: Started cri-containerd-284afe928ee644152140c0928e4443833dc43f7c9d21ff64aed38d2c7bbe8149.scope - libcontainer container 284afe928ee644152140c0928e4443833dc43f7c9d21ff64aed38d2c7bbe8149. 
Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.812 [INFO][5020] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.819 [INFO][5020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" iface="eth0" netns="/var/run/netns/cni-551e2dca-3e1a-7952-b91e-b8a5715095d8" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.820 [INFO][5020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" iface="eth0" netns="/var/run/netns/cni-551e2dca-3e1a-7952-b91e-b8a5715095d8" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.821 [INFO][5020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" iface="eth0" netns="/var/run/netns/cni-551e2dca-3e1a-7952-b91e-b8a5715095d8" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.821 [INFO][5020] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.821 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.979 [INFO][5095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.979 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:27.979 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:28.011 [WARNING][5095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:28.011 [INFO][5095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:28.020 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:28.033909 containerd[1977]: 2024-11-12 20:50:28.027 [INFO][5020] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:28.035055 containerd[1977]: time="2024-11-12T20:50:28.035014569Z" level=info msg="TearDown network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\" successfully" Nov 12 20:50:28.035055 containerd[1977]: time="2024-11-12T20:50:28.035053721Z" level=info msg="StopPodSandbox for \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\" returns successfully" Nov 12 20:50:28.043749 containerd[1977]: time="2024-11-12T20:50:28.042242280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgdrq,Uid:ced66ac8-f918-4e87-97a3-aafcae5e3866,Namespace:calico-system,Attempt:1,}" Nov 12 20:50:28.097659 containerd[1977]: time="2024-11-12T20:50:28.097614222Z" level=info msg="StartContainer for \"284afe928ee644152140c0928e4443833dc43f7c9d21ff64aed38d2c7bbe8149\" returns successfully" Nov 12 20:50:28.504020 systemd-networkd[1811]: cali80b4b80d7e8: Link UP Nov 12 20:50:28.504713 systemd-networkd[1811]: cali80b4b80d7e8: Gained carrier Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.094 [INFO][5140] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0 coredns-6f6b679f8f- kube-system 68c134a7-a52b-440c-8529-b32527bd916b 773 0 2024-11-12 20:49:47 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-74 coredns-6f6b679f8f-wl4cj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80b4b80d7e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.095 [INFO][5140] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.317 [INFO][5164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" HandleID="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.348 [INFO][5164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" HandleID="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315990), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-74", "pod":"coredns-6f6b679f8f-wl4cj", "timestamp":"2024-11-12 20:50:28.316849436 +0000 UTC"}, Hostname:"ip-172-31-17-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.349 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.350 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.351 [INFO][5164] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-74' Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.359 [INFO][5164] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.377 [INFO][5164] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.407 [INFO][5164] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.416 [INFO][5164] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.431 [INFO][5164] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.431 [INFO][5164] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.435 [INFO][5164] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.455 [INFO][5164] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.481 [INFO][5164] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.133/26] block=192.168.41.128/26 handle="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.481 [INFO][5164] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.133/26] handle="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" host="ip-172-31-17-74" Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.481 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:50:28.560955 containerd[1977]: 2024-11-12 20:50:28.482 [INFO][5164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.133/26] IPv6=[] ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" HandleID="k8s-pod-network.dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:28.563902 containerd[1977]: 2024-11-12 20:50:28.497 [INFO][5140] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"68c134a7-a52b-440c-8529-b32527bd916b", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"", Pod:"coredns-6f6b679f8f-wl4cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b4b80d7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:28.563902 containerd[1977]: 2024-11-12 20:50:28.498 [INFO][5140] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.133/32] ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:28.563902 containerd[1977]: 2024-11-12 20:50:28.498 [INFO][5140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80b4b80d7e8 ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:28.563902 containerd[1977]: 2024-11-12 20:50:28.505 [INFO][5140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" 
WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:28.563902 containerd[1977]: 2024-11-12 20:50:28.505 [INFO][5140] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"68c134a7-a52b-440c-8529-b32527bd916b", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc", Pod:"coredns-6f6b679f8f-wl4cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b4b80d7e8", MAC:"4a:76:58:dc:68:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:28.563902 containerd[1977]: 2024-11-12 20:50:28.538 [INFO][5140] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-wl4cj" WorkloadEndpoint="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:28.611835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995260986.mount: Deactivated successfully. Nov 12 20:50:28.612395 systemd[1]: run-netns-cni\x2db76b5b8d\x2d0450\x2df347\x2db1af\x2d907503c9ef5a.mount: Deactivated successfully. Nov 12 20:50:28.612489 systemd[1]: run-netns-cni\x2d551e2dca\x2d3e1a\x2d7952\x2db91e\x2db8a5715095d8.mount: Deactivated successfully. 
Nov 12 20:50:28.636098 systemd-networkd[1811]: calid2e8b9d12c0: Link UP Nov 12 20:50:28.636358 systemd-networkd[1811]: calid2e8b9d12c0: Gained carrier Nov 12 20:50:28.672097 systemd-networkd[1811]: cali8a787e38ed2: Gained IPv6LL Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.300 [INFO][5154] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0 csi-node-driver- calico-system ced66ac8-f918-4e87-97a3-aafcae5e3866 776 0 2024-11-12 20:49:57 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:548d65b7bf k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-74 csi-node-driver-fgdrq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid2e8b9d12c0 [] []}} ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.300 [INFO][5154] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.447 [INFO][5175] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" HandleID="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.485 [INFO][5175] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" HandleID="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101d80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-74", "pod":"csi-node-driver-fgdrq", "timestamp":"2024-11-12 20:50:28.447275552 +0000 UTC"}, Hostname:"ip-172-31-17-74", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.485 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.485 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
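Note: every cali* interface in this log ends up with the same link-local address, fe80::ecee:eeff:feee:eeee (see the ntpd Listen lines near the end of the section). That is the modified EUI-64 expansion of the fixed ee:ee:ee:ee:ee:ee MAC Calico puts on host-side veths: flip the universal/local bit in the first octet and splice ff:fe into the middle. Worked out in Go:

    // Sketch: modified EUI-64 link-local derivation, reproducing the
    // fe80::ecee:eeff:feee:eeee seen on every cali* interface.
    package main

    import (
        "fmt"
        "net"
    )

    func linkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, 16)
        ip[0], ip[1] = 0xfe, 0x80
        copy(ip[8:11], mac[:3])
        ip[8] ^= 0x02 // flip the universal/local bit
        ip[11], ip[12] = 0xff, 0xfe
        copy(ip[13:], mac[3:])
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("ee:ee:ee:ee:ee:ee")
        fmt.Println(linkLocal(mac)) // fe80::ecee:eeff:feee:eeee
    }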
Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.485 [INFO][5175] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-74' Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.491 [INFO][5175] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.501 [INFO][5175] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.515 [INFO][5175] ipam/ipam.go 489: Trying affinity for 192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.521 [INFO][5175] ipam/ipam.go 155: Attempting to load block cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.529 [INFO][5175] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.128/26 host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.530 [INFO][5175] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.128/26 handle="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.536 [INFO][5175] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4 Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.558 [INFO][5175] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.41.128/26 handle="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.586 [INFO][5175] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.41.134/26] block=192.168.41.128/26 handle="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.586 [INFO][5175] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.134/26] handle="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" host="ip-172-31-17-74" Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.586 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:50:28.694637 containerd[1977]: 2024-11-12 20:50:28.586 [INFO][5175] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.134/26] IPv6=[] ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" HandleID="k8s-pod-network.5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.699777 containerd[1977]: 2024-11-12 20:50:28.622 [INFO][5154] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ced66ac8-f918-4e87-97a3-aafcae5e3866", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"", Pod:"csi-node-driver-fgdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2e8b9d12c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:28.699777 containerd[1977]: 2024-11-12 20:50:28.623 [INFO][5154] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.41.134/32] ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.699777 containerd[1977]: 2024-11-12 20:50:28.623 [INFO][5154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2e8b9d12c0 ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.699777 containerd[1977]: 2024-11-12 20:50:28.628 [INFO][5154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.699777 containerd[1977]: 2024-11-12 20:50:28.631 [INFO][5154] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" 
Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ced66ac8-f918-4e87-97a3-aafcae5e3866", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4", Pod:"csi-node-driver-fgdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2e8b9d12c0", MAC:"76:52:4f:c8:b7:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:28.699777 containerd[1977]: 2024-11-12 20:50:28.658 [INFO][5154] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4" Namespace="calico-system" Pod="csi-node-driver-fgdrq" WorkloadEndpoint="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:28.790698 containerd[1977]: time="2024-11-12T20:50:28.784690695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:28.790698 containerd[1977]: time="2024-11-12T20:50:28.784767705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:28.790698 containerd[1977]: time="2024-11-12T20:50:28.784784608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:28.790698 containerd[1977]: time="2024-11-12T20:50:28.784917456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:28.889862 containerd[1977]: time="2024-11-12T20:50:28.863684843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:28.889862 containerd[1977]: time="2024-11-12T20:50:28.863796284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:28.889862 containerd[1977]: time="2024-11-12T20:50:28.863834173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:28.889862 containerd[1977]: time="2024-11-12T20:50:28.863995445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:28.960126 systemd[1]: Started cri-containerd-5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4.scope - libcontainer container 5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4. Nov 12 20:50:28.973229 systemd[1]: Started cri-containerd-dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc.scope - libcontainer container dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc. Nov 12 20:50:29.056195 systemd-networkd[1811]: califbf2b2574d6: Gained IPv6LL Nov 12 20:50:29.180461 containerd[1977]: time="2024-11-12T20:50:29.180416356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wl4cj,Uid:68c134a7-a52b-440c-8529-b32527bd916b,Namespace:kube-system,Attempt:1,} returns sandbox id \"dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc\"" Nov 12 20:50:29.207372 kubelet[3348]: I1112 20:50:29.207289 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qjwkb" podStartSLOduration=42.207263756 podStartE2EDuration="42.207263756s" podCreationTimestamp="2024-11-12 20:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:50:29.195764568 +0000 UTC m=+46.091485002" watchObservedRunningTime="2024-11-12 20:50:29.207263756 +0000 UTC m=+46.102984188" Nov 12 20:50:29.232857 containerd[1977]: time="2024-11-12T20:50:29.232801322Z" level=info msg="CreateContainer within sandbox \"dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:50:29.239842 containerd[1977]: time="2024-11-12T20:50:29.239801111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgdrq,Uid:ced66ac8-f918-4e87-97a3-aafcae5e3866,Namespace:calico-system,Attempt:1,} returns sandbox id \"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4\"" Nov 12 20:50:29.331982 containerd[1977]: time="2024-11-12T20:50:29.330562504Z" level=info msg="CreateContainer within sandbox \"dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca91ee4c0fba1f7efd86fbc772af3d552fc21aafc0ec26b90a948c5cc62c7d14\"" Nov 12 20:50:29.334176 containerd[1977]: time="2024-11-12T20:50:29.334091866Z" level=info msg="StartContainer for \"ca91ee4c0fba1f7efd86fbc772af3d552fc21aafc0ec26b90a948c5cc62c7d14\"" Nov 12 20:50:29.484258 systemd[1]: Started cri-containerd-ca91ee4c0fba1f7efd86fbc772af3d552fc21aafc0ec26b90a948c5cc62c7d14.scope - libcontainer container ca91ee4c0fba1f7efd86fbc772af3d552fc21aafc0ec26b90a948c5cc62c7d14. Nov 12 20:50:29.614600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727898213.mount: Deactivated successfully. 
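Note: the kubelet pod_startup_latency_tracker entry above is plain subtraction. With firstStartedPulling and lastFinishedPulling at the zero time (no image pull for this coredns pod), podStartSLOduration equals podStartE2EDuration, which is watchObservedRunningTime minus podCreationTimestamp. Checking the arithmetic in Go with the timestamps copied from the entry:

    // Sketch: the subtraction behind podStartE2EDuration in the kubelet line.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999"
        created, _ := time.Parse(layout, "2024-11-12 20:49:47")
        running, _ := time.Parse(layout, "2024-11-12 20:50:29.207263756")
        fmt.Println(running.Sub(created)) // 42.207263756s, matching the log
    }

Compare the calico-kube-controllers entry further down: there roughly 5.73s of image pulling (lastFinishedPulling minus firstStartedPulling) is excluded from the SLO figure, giving podStartSLOduration=27.52s against a 33.25s end-to-end duration.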
Nov 12 20:50:29.759808 containerd[1977]: time="2024-11-12T20:50:29.759510257Z" level=info msg="StartContainer for \"ca91ee4c0fba1f7efd86fbc772af3d552fc21aafc0ec26b90a948c5cc62c7d14\" returns successfully" Nov 12 20:50:30.081373 systemd-networkd[1811]: cali80b4b80d7e8: Gained IPv6LL Nov 12 20:50:30.258790 kubelet[3348]: I1112 20:50:30.258510 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wl4cj" podStartSLOduration=43.258486046 podStartE2EDuration="43.258486046s" podCreationTimestamp="2024-11-12 20:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:50:30.20409061 +0000 UTC m=+47.099811041" watchObservedRunningTime="2024-11-12 20:50:30.258486046 +0000 UTC m=+47.154206478" Nov 12 20:50:30.295802 containerd[1977]: time="2024-11-12T20:50:30.295747309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:30.298117 containerd[1977]: time="2024-11-12T20:50:30.298058198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:50:30.299435 containerd[1977]: time="2024-11-12T20:50:30.299060139Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:30.304184 containerd[1977]: time="2024-11-12T20:50:30.304132988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:30.306386 containerd[1977]: time="2024-11-12T20:50:30.306324325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 5.730147433s" Nov 12 20:50:30.306793 containerd[1977]: time="2024-11-12T20:50:30.306588034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:50:30.308589 containerd[1977]: time="2024-11-12T20:50:30.308365239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:50:30.369250 containerd[1977]: time="2024-11-12T20:50:30.368784123Z" level=info msg="CreateContainer within sandbox \"e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:50:30.403011 containerd[1977]: time="2024-11-12T20:50:30.402839595Z" level=info msg="CreateContainer within sandbox \"e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd\"" Nov 12 20:50:30.404199 containerd[1977]: time="2024-11-12T20:50:30.403654542Z" level=info msg="StartContainer for \"b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd\"" Nov 12 20:50:30.478173 systemd[1]: 
Started cri-containerd-b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd.scope - libcontainer container b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd. Nov 12 20:50:30.528498 systemd-networkd[1811]: calid2e8b9d12c0: Gained IPv6LL Nov 12 20:50:30.627666 containerd[1977]: time="2024-11-12T20:50:30.627239706Z" level=info msg="StartContainer for \"b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd\" returns successfully" Nov 12 20:50:31.254974 kubelet[3348]: I1112 20:50:31.253411 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76ddd8f86f-mvf9r" podStartSLOduration=27.520432626 podStartE2EDuration="33.253392443s" podCreationTimestamp="2024-11-12 20:49:58 +0000 UTC" firstStartedPulling="2024-11-12 20:50:24.575197822 +0000 UTC m=+41.470918234" lastFinishedPulling="2024-11-12 20:50:30.308157627 +0000 UTC m=+47.203878051" observedRunningTime="2024-11-12 20:50:31.253047944 +0000 UTC m=+48.148768375" watchObservedRunningTime="2024-11-12 20:50:31.253392443 +0000 UTC m=+48.149112874" Nov 12 20:50:31.330239 systemd[1]: run-containerd-runc-k8s.io-b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd-runc.vzm0JC.mount: Deactivated successfully. Nov 12 20:50:31.777980 systemd[1]: run-containerd-runc-k8s.io-547b8881e4fc0ab50fa6cdbbee2e770c882790889ff8bff6f63962da0d65efea-runc.ej3ndL.mount: Deactivated successfully. Nov 12 20:50:33.367939 ntpd[1941]: Listen normally on 6 vxlan.calico 192.168.41.128:123 Nov 12 20:50:33.368033 ntpd[1941]: Listen normally on 7 calif45bb6c2c37 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 12 20:50:33.368089 ntpd[1941]: Listen normally on 8 vxlan.calico [fe80::6411:66ff:fe5c:4f44%5]:123 Nov 12 20:50:33.368127 ntpd[1941]: Listen normally on 9 cali7234e6ca0ed [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 20:50:33.368162 ntpd[1941]: Listen normally on 10 cali8a787e38ed2 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 20:50:33.368205 ntpd[1941]: Listen normally on 11 califbf2b2574d6 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 20:50:33.368242 ntpd[1941]: Listen normally on 12 cali80b4b80d7e8 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 20:50:33.368280 ntpd[1941]: Listen normally on 13 calid2e8b9d12c0 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 20:50:34.446493 containerd[1977]: time="2024-11-12T20:50:34.446298842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\"
value:\"managed\"}" Nov 12 20:50:34.448272 containerd[1977]: time="2024-11-12T20:50:34.448186999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:50:34.452251 containerd[1977]: time="2024-11-12T20:50:34.452200605Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:34.457747 containerd[1977]: time="2024-11-12T20:50:34.457697930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:34.459024 containerd[1977]: time="2024-11-12T20:50:34.458983839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 4.150577125s" Nov 12 20:50:34.459135 containerd[1977]: time="2024-11-12T20:50:34.459027656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:50:34.462087 containerd[1977]: time="2024-11-12T20:50:34.461753763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:50:34.464397 containerd[1977]: time="2024-11-12T20:50:34.463560817Z" level=info msg="CreateContainer within sandbox \"fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:50:34.491709 containerd[1977]: time="2024-11-12T20:50:34.491639705Z" level=info msg="CreateContainer within sandbox \"fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9bf16329e2e96cd2360bfdf32b0e69c840568df3b5b0c523946e40620c488c37\"" Nov 12 20:50:34.493136 containerd[1977]: time="2024-11-12T20:50:34.493098348Z" level=info msg="StartContainer for \"9bf16329e2e96cd2360bfdf32b0e69c840568df3b5b0c523946e40620c488c37\"" Nov 12 20:50:34.584092 systemd[1]: Started cri-containerd-9bf16329e2e96cd2360bfdf32b0e69c840568df3b5b0c523946e40620c488c37.scope - libcontainer container 9bf16329e2e96cd2360bfdf32b0e69c840568df3b5b0c523946e40620c488c37. 
Nov 12 20:50:34.686866 containerd[1977]: time="2024-11-12T20:50:34.686760207Z" level=info msg="StartContainer for \"9bf16329e2e96cd2360bfdf32b0e69c840568df3b5b0c523946e40620c488c37\" returns successfully" Nov 12 20:50:34.886906 containerd[1977]: time="2024-11-12T20:50:34.886777913Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:34.924553 containerd[1977]: time="2024-11-12T20:50:34.924486001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:50:34.928086 containerd[1977]: time="2024-11-12T20:50:34.927404563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 465.526952ms" Nov 12 20:50:34.928086 containerd[1977]: time="2024-11-12T20:50:34.927452574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:50:34.930966 containerd[1977]: time="2024-11-12T20:50:34.930244202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:50:34.949965 containerd[1977]: time="2024-11-12T20:50:34.948611907Z" level=info msg="CreateContainer within sandbox \"a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:50:34.977675 containerd[1977]: time="2024-11-12T20:50:34.977628175Z" level=info msg="CreateContainer within sandbox \"a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c54a228db389c37bb0fe21890a2c0180dbaee8c2e3947a15376216b8ac1f40ff\"" Nov 12 20:50:34.981310 containerd[1977]: time="2024-11-12T20:50:34.981090695Z" level=info msg="StartContainer for \"c54a228db389c37bb0fe21890a2c0180dbaee8c2e3947a15376216b8ac1f40ff\"" Nov 12 20:50:35.072100 systemd[1]: Started cri-containerd-c54a228db389c37bb0fe21890a2c0180dbaee8c2e3947a15376216b8ac1f40ff.scope - libcontainer container c54a228db389c37bb0fe21890a2c0180dbaee8c2e3947a15376216b8ac1f40ff. Nov 12 20:50:35.250473 containerd[1977]: time="2024-11-12T20:50:35.249718231Z" level=info msg="StartContainer for \"c54a228db389c37bb0fe21890a2c0180dbaee8c2e3947a15376216b8ac1f40ff\" returns successfully" Nov 12 20:50:35.713281 systemd[1]: Started sshd@7-172.31.17.74:22-139.178.89.65:55120.service - OpenSSH per-connection server daemon (139.178.89.65:55120). Nov 12 20:50:35.982420 sshd[5528]: Accepted publickey for core from 139.178.89.65 port 55120 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:50:35.987501 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:35.998440 systemd-logind[1946]: New session 8 of user core. Nov 12 20:50:36.006199 systemd[1]: Started session-8.scope - Session 8 of User core. 
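Worth noting in the lines above: the second calico/apiserver pull finishes in 465.526952ms with only 77 bytes read, versus 41963930 bytes and 4.15s for the first. The same image (same digest) had just been pulled for the other apiserver pod, so the blobs were presumably already in containerd's content store and only the manifest had to be re-resolved. A hedged sketch (hypothetical log-parsing code, not part of containerd) that classifies such pulls from the log text:

```go
// Pair each containerd "bytes read=N" line with the following
// "Pulled image ... in <duration>" line and flag manifest-only (warm)
// pulls. Sample lines are abridged from the log above.
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

func main() {
	lines := []string{
		`stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930`,
		`Pulled image "ghcr.io/flatcar/calico/apiserver:v3.29.0" ... in 4.150577125s`,
		`stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77`,
		`Pulled image "ghcr.io/flatcar/calico/apiserver:v3.29.0" ... in 465.526952ms`,
	}

	reRead := regexp.MustCompile(`bytes read=(\d+)`)
	reTook := regexp.MustCompile(` in ([0-9.]+(?:ms|s))`)

	pending := int64(-1)
	for _, l := range lines {
		if m := reRead.FindStringSubmatch(l); m != nil {
			pending, _ = strconv.ParseInt(m[1], 10, 64)
			continue
		}
		if m := reTook.FindStringSubmatch(l); m != nil && pending >= 0 {
			kind := "cold pull"
			if pending < 1024 { // heuristic: only the manifest was fetched
				kind = "warm pull"
			}
			fmt.Printf("%-9s %10d bytes in %s\n", kind, pending, m[1])
			pending = -1
		}
	}
}
```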
Nov 12 20:50:36.298861 kubelet[3348]: I1112 20:50:36.298800 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-zplh5" podStartSLOduration=31.048708377 podStartE2EDuration="39.298765182s" podCreationTimestamp="2024-11-12 20:49:57 +0000 UTC" firstStartedPulling="2024-11-12 20:50:26.211089222 +0000 UTC m=+43.106809640" lastFinishedPulling="2024-11-12 20:50:34.461146017 +0000 UTC m=+51.356866445" observedRunningTime="2024-11-12 20:50:35.28111046 +0000 UTC m=+52.176830903" watchObservedRunningTime="2024-11-12 20:50:36.298765182 +0000 UTC m=+53.194485609" Nov 12 20:50:36.884294 containerd[1977]: time="2024-11-12T20:50:36.882325877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:36.885507 containerd[1977]: time="2024-11-12T20:50:36.885451687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:50:36.888848 containerd[1977]: time="2024-11-12T20:50:36.887812918Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:36.896868 containerd[1977]: time="2024-11-12T20:50:36.896820280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:36.899001 containerd[1977]: time="2024-11-12T20:50:36.898954546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.96867165s" Nov 12 20:50:36.900735 containerd[1977]: time="2024-11-12T20:50:36.900695613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:50:36.908802 containerd[1977]: time="2024-11-12T20:50:36.908759608Z" level=info msg="CreateContainer within sandbox \"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:50:37.029832 containerd[1977]: time="2024-11-12T20:50:37.029783876Z" level=info msg="CreateContainer within sandbox \"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8fecbeb768543d84467d8cef1af50dbad8f4e16b3a31a2c685682818571d161a\"" Nov 12 20:50:37.030980 containerd[1977]: time="2024-11-12T20:50:37.030817691Z" level=info msg="StartContainer for \"8fecbeb768543d84467d8cef1af50dbad8f4e16b3a31a2c685682818571d161a\"" Nov 12 20:50:37.155684 systemd[1]: run-containerd-runc-k8s.io-8fecbeb768543d84467d8cef1af50dbad8f4e16b3a31a2c685682818571d161a-runc.v8kytF.mount: Deactivated successfully. Nov 12 20:50:37.170203 systemd[1]: Started cri-containerd-8fecbeb768543d84467d8cef1af50dbad8f4e16b3a31a2c685682818571d161a.scope - libcontainer container 8fecbeb768543d84467d8cef1af50dbad8f4e16b3a31a2c685682818571d161a. 
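In the latency line above, the two figures differ by exactly the image pull window: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (39.298765182s), and podStartSLOduration excludes the span from firstStartedPulling to lastFinishedPulling. Recomputing with the monotonic readings (the m=+... offsets) reproduces the logged 31.048708377s to the nanosecond, which suggests the subtraction is done on the monotonic clock rather than the wall-clock fields. A minimal sketch of that arithmetic (not kubelet code):

```go
// podStartSLOduration = E2E duration - image pull window, computed here
// from the m=+... monotonic offsets in the log line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 39298765182 * time.Nanosecond // podStartE2EDuration: 39.298765182s

	firstPull := 43106809640 * time.Nanosecond // firstStartedPulling m=+43.106809640
	lastPull := 51356866445 * time.Nanosecond  // lastFinishedPulling m=+51.356866445

	fmt.Println(e2e - (lastPull - firstPull)) // 31.048708377s
}
```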
Nov 12 20:50:37.318146 kubelet[3348]: I1112 20:50:37.318103 3348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:50:37.353035 sshd[5528]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:37.367051 systemd[1]: sshd@7-172.31.17.74:22-139.178.89.65:55120.service: Deactivated successfully. Nov 12 20:50:37.377526 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:50:37.383871 systemd-logind[1946]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:50:37.389268 systemd-logind[1946]: Removed session 8. Nov 12 20:50:37.404141 containerd[1977]: time="2024-11-12T20:50:37.404031152Z" level=info msg="StartContainer for \"8fecbeb768543d84467d8cef1af50dbad8f4e16b3a31a2c685682818571d161a\" returns successfully" Nov 12 20:50:37.407196 containerd[1977]: time="2024-11-12T20:50:37.407058499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:50:38.794607 kubelet[3348]: I1112 20:50:38.794514 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76bdbb8dbb-dsnm6" podStartSLOduration=34.798823634 podStartE2EDuration="41.794491203s" podCreationTimestamp="2024-11-12 20:49:57 +0000 UTC" firstStartedPulling="2024-11-12 20:50:27.934035397 +0000 UTC m=+44.829755819" lastFinishedPulling="2024-11-12 20:50:34.92970297 +0000 UTC m=+51.825423388" observedRunningTime="2024-11-12 20:50:36.304717967 +0000 UTC m=+53.200438412" watchObservedRunningTime="2024-11-12 20:50:38.794491203 +0000 UTC m=+55.690211634" Nov 12 20:50:39.386553 containerd[1977]: time="2024-11-12T20:50:39.386485176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:39.389748 containerd[1977]: time="2024-11-12T20:50:39.389682350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:50:39.400831 containerd[1977]: time="2024-11-12T20:50:39.400784743Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:39.415240 containerd[1977]: time="2024-11-12T20:50:39.415190816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:39.417086 containerd[1977]: time="2024-11-12T20:50:39.417038041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.009613433s" Nov 12 20:50:39.417223 containerd[1977]: time="2024-11-12T20:50:39.417092379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:50:39.420867 containerd[1977]: time="2024-11-12T20:50:39.420825083Z" level=info msg="CreateContainer within sandbox \"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:50:39.454060 containerd[1977]: time="2024-11-12T20:50:39.454012797Z" level=info msg="CreateContainer within sandbox \"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"046063af7a7564c3d82cc03824dc69f06d485e8faeb650334b9e7f7bf8042b5f\"" Nov 12 20:50:39.455946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105909097.mount: Deactivated successfully. Nov 12 20:50:39.461383 containerd[1977]: time="2024-11-12T20:50:39.456182713Z" level=info msg="StartContainer for \"046063af7a7564c3d82cc03824dc69f06d485e8faeb650334b9e7f7bf8042b5f\"" Nov 12 20:50:39.533130 systemd[1]: Started cri-containerd-046063af7a7564c3d82cc03824dc69f06d485e8faeb650334b9e7f7bf8042b5f.scope - libcontainer container 046063af7a7564c3d82cc03824dc69f06d485e8faeb650334b9e7f7bf8042b5f. Nov 12 20:50:39.595419 containerd[1977]: time="2024-11-12T20:50:39.595279729Z" level=info msg="StartContainer for \"046063af7a7564c3d82cc03824dc69f06d485e8faeb650334b9e7f7bf8042b5f\" returns successfully" Nov 12 20:50:41.164923 kubelet[3348]: I1112 20:50:41.150281 3348 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:50:41.182089 kubelet[3348]: I1112 20:50:41.182041 3348 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:50:42.451437 systemd[1]: Started sshd@8-172.31.17.74:22-139.178.89.65:40828.service - OpenSSH per-connection server daemon (139.178.89.65:40828). Nov 12 20:50:42.696466 sshd[5630]: Accepted publickey for core from 139.178.89.65 port 40828 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:50:42.698407 sshd[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:42.709994 systemd-logind[1946]: New session 9 of user core. Nov 12 20:50:42.715764 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:50:43.295288 sshd[5630]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:43.324507 systemd[1]: sshd@8-172.31.17.74:22-139.178.89.65:40828.service: Deactivated successfully. Nov 12 20:50:43.369645 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:50:43.376073 systemd-logind[1946]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:50:43.381465 systemd-logind[1946]: Removed session 9. Nov 12 20:50:43.484472 containerd[1977]: time="2024-11-12T20:50:43.484180398Z" level=info msg="StopPodSandbox for \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\"" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.261 [WARNING][5658] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5024c148-6455-4352-9054-dbb7e7eda5a0", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521", Pod:"calico-apiserver-76bdbb8dbb-zplh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7234e6ca0ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.263 [INFO][5658] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.264 [INFO][5658] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" iface="eth0" netns="" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.264 [INFO][5658] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.264 [INFO][5658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.407 [INFO][5664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.407 [INFO][5664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.407 [INFO][5664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.428 [WARNING][5664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.429 [INFO][5664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.432 [INFO][5664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:44.437929 containerd[1977]: 2024-11-12 20:50:44.435 [INFO][5658] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.437929 containerd[1977]: time="2024-11-12T20:50:44.437831975Z" level=info msg="TearDown network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\" successfully" Nov 12 20:50:44.437929 containerd[1977]: time="2024-11-12T20:50:44.437861174Z" level=info msg="StopPodSandbox for \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\" returns successfully" Nov 12 20:50:44.482911 containerd[1977]: time="2024-11-12T20:50:44.482800623Z" level=info msg="RemovePodSandbox for \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\"" Nov 12 20:50:44.487828 containerd[1977]: time="2024-11-12T20:50:44.487695051Z" level=info msg="Forcibly stopping sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\"" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.555 [WARNING][5684] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5024c148-6455-4352-9054-dbb7e7eda5a0", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"fc939d20a145a1156db8c752d85b395f53f58d72fb0dce0f4393773c03b1c521", Pod:"calico-apiserver-76bdbb8dbb-zplh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7234e6ca0ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.555 [INFO][5684] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.555 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" iface="eth0" netns="" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.555 [INFO][5684] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.555 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.615 [INFO][5691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.615 [INFO][5691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.615 [INFO][5691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.637 [WARNING][5691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.637 [INFO][5691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" HandleID="k8s-pod-network.0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--zplh5-eth0" Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.639 [INFO][5691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:44.646282 containerd[1977]: 2024-11-12 20:50:44.643 [INFO][5684] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb" Nov 12 20:50:44.647268 containerd[1977]: time="2024-11-12T20:50:44.646420367Z" level=info msg="TearDown network for sandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\" successfully" Nov 12 20:50:44.660095 containerd[1977]: time="2024-11-12T20:50:44.660042420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:50:44.669833 containerd[1977]: time="2024-11-12T20:50:44.669770328Z" level=info msg="RemovePodSandbox \"0c0afe59b6e432e303d055e4e2e3168ae65337a3bae36b1b6fb19b25e7b88fcb\" returns successfully" Nov 12 20:50:44.678061 containerd[1977]: time="2024-11-12T20:50:44.678004797Z" level=info msg="StopPodSandbox for \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\"" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.751 [WARNING][5709] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecb9989-c942-4a9c-aaee-09d0b8038260", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9", Pod:"calico-apiserver-76bdbb8dbb-dsnm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbf2b2574d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.751 [INFO][5709] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.751 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" iface="eth0" netns="" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.751 [INFO][5709] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.751 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.828 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.828 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.829 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.871 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.871 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.879 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:44.890426 containerd[1977]: 2024-11-12 20:50:44.883 [INFO][5709] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:44.891148 containerd[1977]: time="2024-11-12T20:50:44.890466436Z" level=info msg="TearDown network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\" successfully" Nov 12 20:50:44.891148 containerd[1977]: time="2024-11-12T20:50:44.890519088Z" level=info msg="StopPodSandbox for \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\" returns successfully" Nov 12 20:50:44.891148 containerd[1977]: time="2024-11-12T20:50:44.891107969Z" level=info msg="RemovePodSandbox for \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\"" Nov 12 20:50:44.891148 containerd[1977]: time="2024-11-12T20:50:44.891142719Z" level=info msg="Forcibly stopping sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\"" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:44.997 [WARNING][5734] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0", GenerateName:"calico-apiserver-76bdbb8dbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecb9989-c942-4a9c-aaee-09d0b8038260", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bdbb8dbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"a9b78090aa2a6f96614bc2ae4fb3b92bf3cb46740d4954e681113cfbd14882e9", Pod:"calico-apiserver-76bdbb8dbb-dsnm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbf2b2574d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:44.998 [INFO][5734] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:44.998 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" iface="eth0" netns="" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:44.998 [INFO][5734] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:44.998 [INFO][5734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:45.027 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:45.028 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:45.028 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:45.039 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:45.039 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" HandleID="k8s-pod-network.d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Workload="ip--172--31--17--74-k8s-calico--apiserver--76bdbb8dbb--dsnm6-eth0" Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:45.041 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:45.046979 containerd[1977]: 2024-11-12 20:50:45.044 [INFO][5734] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21" Nov 12 20:50:45.046979 containerd[1977]: time="2024-11-12T20:50:45.046860979Z" level=info msg="TearDown network for sandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\" successfully" Nov 12 20:50:45.055447 containerd[1977]: time="2024-11-12T20:50:45.054106612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:50:45.055447 containerd[1977]: time="2024-11-12T20:50:45.054191075Z" level=info msg="RemovePodSandbox \"d6712f3a03a2b62c033c50e14db3d09f23bba8a1701627eec817e42c63ca4b21\" returns successfully" Nov 12 20:50:45.055447 containerd[1977]: time="2024-11-12T20:50:45.054757284Z" level=info msg="StopPodSandbox for \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\"" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.103 [WARNING][5758] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"68c134a7-a52b-440c-8529-b32527bd916b", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc", Pod:"coredns-6f6b679f8f-wl4cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b4b80d7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.104 [INFO][5758] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.104 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" iface="eth0" netns="" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.104 [INFO][5758] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.104 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.153 [INFO][5764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.153 [INFO][5764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.153 [INFO][5764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.165 [WARNING][5764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.165 [INFO][5764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.167 [INFO][5764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:45.172421 containerd[1977]: 2024-11-12 20:50:45.170 [INFO][5758] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.175595 containerd[1977]: time="2024-11-12T20:50:45.173097094Z" level=info msg="TearDown network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\" successfully" Nov 12 20:50:45.175595 containerd[1977]: time="2024-11-12T20:50:45.173130538Z" level=info msg="StopPodSandbox for \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\" returns successfully" Nov 12 20:50:45.175595 containerd[1977]: time="2024-11-12T20:50:45.175063004Z" level=info msg="RemovePodSandbox for \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\"" Nov 12 20:50:45.175595 containerd[1977]: time="2024-11-12T20:50:45.175098516Z" level=info msg="Forcibly stopping sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\"" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.229 [WARNING][5783] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"68c134a7-a52b-440c-8529-b32527bd916b", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"dce78340383709fb56e59360cae75a912103b9c0ec308260eef03d81fb14a8cc", Pod:"coredns-6f6b679f8f-wl4cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b4b80d7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.229 [INFO][5783] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.229 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" iface="eth0" netns="" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.229 [INFO][5783] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.229 [INFO][5783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.330 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.330 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.330 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.339 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.339 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" HandleID="k8s-pod-network.ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--wl4cj-eth0" Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.344 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:45.350037 containerd[1977]: 2024-11-12 20:50:45.346 [INFO][5783] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f" Nov 12 20:50:45.350784 containerd[1977]: time="2024-11-12T20:50:45.350111018Z" level=info msg="TearDown network for sandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\" successfully" Nov 12 20:50:45.356361 containerd[1977]: time="2024-11-12T20:50:45.356098737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:50:45.356361 containerd[1977]: time="2024-11-12T20:50:45.356182391Z" level=info msg="RemovePodSandbox \"ea199b9312fdc0f4fc6cd913b9fc89d8a9bfa222872e5e811e81559d133a3d7f\" returns successfully" Nov 12 20:50:45.357441 containerd[1977]: time="2024-11-12T20:50:45.357408364Z" level=info msg="StopPodSandbox for \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\"" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.568 [WARNING][5813] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0", GenerateName:"calico-kube-controllers-76ddd8f86f-", Namespace:"calico-system", SelfLink:"", UID:"0e54a56b-0402-45a2-887b-cb2004ed0683", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddd8f86f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b", Pod:"calico-kube-controllers-76ddd8f86f-mvf9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif45bb6c2c37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.569 [INFO][5813] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.569 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" iface="eth0" netns="" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.569 [INFO][5813] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.569 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.617 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.617 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.617 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.626 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.627 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.630 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:45.638630 containerd[1977]: 2024-11-12 20:50:45.632 [INFO][5813] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.641393 containerd[1977]: time="2024-11-12T20:50:45.639786642Z" level=info msg="TearDown network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\" successfully" Nov 12 20:50:45.641393 containerd[1977]: time="2024-11-12T20:50:45.639904577Z" level=info msg="StopPodSandbox for \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\" returns successfully" Nov 12 20:50:45.641393 containerd[1977]: time="2024-11-12T20:50:45.640808925Z" level=info msg="RemovePodSandbox for \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\"" Nov 12 20:50:45.641393 containerd[1977]: time="2024-11-12T20:50:45.640847518Z" level=info msg="Forcibly stopping sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\"" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.695 [WARNING][5839] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0", GenerateName:"calico-kube-controllers-76ddd8f86f-", Namespace:"calico-system", SelfLink:"", UID:"0e54a56b-0402-45a2-887b-cb2004ed0683", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddd8f86f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"e158ee7a493af125ddbd7e7b38cda8396c55b4578b481e810ac175abd02d580b", Pod:"calico-kube-controllers-76ddd8f86f-mvf9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif45bb6c2c37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.696 [INFO][5839] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.696 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" iface="eth0" netns="" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.696 [INFO][5839] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.696 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.745 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.746 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.746 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.753 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.753 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" HandleID="k8s-pod-network.3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Workload="ip--172--31--17--74-k8s-calico--kube--controllers--76ddd8f86f--mvf9r-eth0" Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.755 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:45.759001 containerd[1977]: 2024-11-12 20:50:45.756 [INFO][5839] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3" Nov 12 20:50:45.759749 containerd[1977]: time="2024-11-12T20:50:45.759111976Z" level=info msg="TearDown network for sandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\" successfully" Nov 12 20:50:45.763728 containerd[1977]: time="2024-11-12T20:50:45.763682174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:50:45.764278 containerd[1977]: time="2024-11-12T20:50:45.763763757Z" level=info msg="RemovePodSandbox \"3189a8136dd829a038819c0edada288156e3090c375a64251e3c5c664e8b11e3\" returns successfully" Nov 12 20:50:45.764656 containerd[1977]: time="2024-11-12T20:50:45.764621189Z" level=info msg="StopPodSandbox for \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\"" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.847 [WARNING][5863] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ced66ac8-f918-4e87-97a3-aafcae5e3866", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4", Pod:"csi-node-driver-fgdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2e8b9d12c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.848 [INFO][5863] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.848 [INFO][5863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" iface="eth0" netns="" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.848 [INFO][5863] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.848 [INFO][5863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.973 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.975 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:45.975 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:46.007 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:46.009 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:46.025 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:46.041503 containerd[1977]: 2024-11-12 20:50:46.031 [INFO][5863] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.053858 containerd[1977]: time="2024-11-12T20:50:46.041552906Z" level=info msg="TearDown network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\" successfully" Nov 12 20:50:46.053858 containerd[1977]: time="2024-11-12T20:50:46.041585026Z" level=info msg="StopPodSandbox for \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\" returns successfully" Nov 12 20:50:46.053858 containerd[1977]: time="2024-11-12T20:50:46.050290762Z" level=info msg="RemovePodSandbox for \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\"" Nov 12 20:50:46.053858 containerd[1977]: time="2024-11-12T20:50:46.050328472Z" level=info msg="Forcibly stopping sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\"" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.226 [WARNING][5888] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ced66ac8-f918-4e87-97a3-aafcae5e3866", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"5cfba6e8710adeb38d39e47949a875839a0d69010aa50dac8a76dd0b07caf0d4", Pod:"csi-node-driver-fgdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid2e8b9d12c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.227 [INFO][5888] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.227 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" iface="eth0" netns="" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.229 [INFO][5888] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.229 [INFO][5888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.404 [INFO][5894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.405 [INFO][5894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.405 [INFO][5894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.427 [WARNING][5894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.428 [INFO][5894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" HandleID="k8s-pod-network.751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Workload="ip--172--31--17--74-k8s-csi--node--driver--fgdrq-eth0" Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.431 [INFO][5894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:46.438039 containerd[1977]: 2024-11-12 20:50:46.435 [INFO][5888] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0" Nov 12 20:50:46.440424 containerd[1977]: time="2024-11-12T20:50:46.439119243Z" level=info msg="TearDown network for sandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\" successfully" Nov 12 20:50:46.447643 containerd[1977]: time="2024-11-12T20:50:46.447145009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:50:46.447643 containerd[1977]: time="2024-11-12T20:50:46.447265316Z" level=info msg="RemovePodSandbox \"751548fc215386774a72a867e5501383754b3a0f3be6e702864bfa38de8893c0\" returns successfully" Nov 12 20:50:46.449037 containerd[1977]: time="2024-11-12T20:50:46.449000642Z" level=info msg="StopPodSandbox for \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\"" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.568 [WARNING][5913] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"668ad4dc-2321-4d8c-901f-2cce25a4f11a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168", Pod:"coredns-6f6b679f8f-qjwkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a787e38ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.568 [INFO][5913] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.568 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" iface="eth0" netns="" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.568 [INFO][5913] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.568 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.695 [INFO][5919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.696 [INFO][5919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.696 [INFO][5919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.714 [WARNING][5919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.714 [INFO][5919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.720 [INFO][5919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:46.731675 containerd[1977]: 2024-11-12 20:50:46.726 [INFO][5913] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.731675 containerd[1977]: time="2024-11-12T20:50:46.731639505Z" level=info msg="TearDown network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\" successfully" Nov 12 20:50:46.731675 containerd[1977]: time="2024-11-12T20:50:46.731672028Z" level=info msg="StopPodSandbox for \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\" returns successfully" Nov 12 20:50:46.734200 containerd[1977]: time="2024-11-12T20:50:46.734169476Z" level=info msg="RemovePodSandbox for \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\"" Nov 12 20:50:46.734251 containerd[1977]: time="2024-11-12T20:50:46.734207606Z" level=info msg="Forcibly stopping sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\"" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.833 [WARNING][5937] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"668ad4dc-2321-4d8c-901f-2cce25a4f11a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 49, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-74", ContainerID:"ac530cc2c37939a4124afae0204a1e8d69a23d7121fe406eae1d2f49b90ee168", Pod:"coredns-6f6b679f8f-qjwkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a787e38ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.834 [INFO][5937] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.834 [INFO][5937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" iface="eth0" netns="" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.834 [INFO][5937] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.834 [INFO][5937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.911 [INFO][5943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.911 [INFO][5943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.911 [INFO][5943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.940 [WARNING][5943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.940 [INFO][5943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" HandleID="k8s-pod-network.d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Workload="ip--172--31--17--74-k8s-coredns--6f6b679f8f--qjwkb-eth0" Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.943 [INFO][5943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:50:46.949075 containerd[1977]: 2024-11-12 20:50:46.945 [INFO][5937] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46" Nov 12 20:50:46.950977 containerd[1977]: time="2024-11-12T20:50:46.949229075Z" level=info msg="TearDown network for sandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\" successfully" Nov 12 20:50:46.955211 containerd[1977]: time="2024-11-12T20:50:46.955054429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:50:46.955492 containerd[1977]: time="2024-11-12T20:50:46.955392799Z" level=info msg="RemovePodSandbox \"d75d068bc4e152bfc57992374a58a435a4f186c251f82488bcecbf2b3d238c46\" returns successfully" Nov 12 20:50:48.339280 systemd[1]: Started sshd@9-172.31.17.74:22-139.178.89.65:55624.service - OpenSSH per-connection server daemon (139.178.89.65:55624). Nov 12 20:50:48.587366 sshd[5950]: Accepted publickey for core from 139.178.89.65 port 55624 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:50:48.588041 sshd[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:48.598920 systemd-logind[1946]: New session 10 of user core. Nov 12 20:50:48.609122 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:50:49.219045 sshd[5950]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:49.242177 systemd[1]: sshd@9-172.31.17.74:22-139.178.89.65:55624.service: Deactivated successfully. Nov 12 20:50:49.252396 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:50:49.261574 systemd-logind[1946]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:50:49.263234 systemd-logind[1946]: Removed session 10. Nov 12 20:50:52.616582 systemd[1]: run-containerd-runc-k8s.io-b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd-runc.ud1Lcj.mount: Deactivated successfully. Nov 12 20:50:54.252840 systemd[1]: Started sshd@10-172.31.17.74:22-139.178.89.65:55630.service - OpenSSH per-connection server daemon (139.178.89.65:55630). 
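[Editor's sketch] The records above repeat one Calico teardown pattern per sandbox: the CNI plugin declines to delete a WorkloadEndpoint whose ContainerID no longer matches, takes the host-wide IPAM lock, tries to release the address by its handle ID ("k8s-pod-network.<containerID>" in these lines), falls back to the workload ID, and logs a warning rather than failing when the address is already gone. Below is a minimal Go sketch of that control flow only; the type, the in-memory map, and all names are hypothetical stand-ins, not Calico's actual IPAM implementation.

    // ipamrelease.go: illustrative only; mirrors the observable two-step
    // release-and-ignore-missing behavior in the teardown records above.
    package main

    import (
        "fmt"
        "sync"
    )

    type allocator struct {
        mu   sync.Mutex        // stands in for the "host-wide IPAM lock"
        byID map[string]string // allocation key -> IP (hypothetical store)
    }

    // release tries the handle ID first, then the workload ID, matching the
    // ipam_plugin.go 412 -> 429 -> 440 sequence in the log.
    func (a *allocator) release(handleID, workloadID string) {
        a.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer a.mu.Unlock() // "Released host-wide IPAM lock."
        for _, key := range []string{handleID, workloadID} {
            if ip, ok := a.byID[key]; ok {
                delete(a.byID, key)
                fmt.Printf("released %s for %s\n", ip, key)
                return
            }
            // Matches the WARNING above: asked to release an address that
            // does not exist; ignore it and try the next key.
            fmt.Printf("WARNING: no address under %q, ignoring\n", key)
        }
    }

    func main() {
        a := &allocator{byID: map[string]string{}}
        // A repeated teardown of an already-released sandbox misses on both
        // keys, producing the WARNING/INFO pair seen for each sandbox above.
        a.release("k8s-pod-network.ea199b9312fd", "ea199b9312fd")
    }

Treating a missing allocation as a warning is what lets RemovePodSandbox still "return successfully" for sandboxes that were torn down once already.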
Nov 12 20:50:54.459995 sshd[5988]: Accepted publickey for core from 139.178.89.65 port 55630 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:50:54.471279 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:54.477448 systemd-logind[1946]: New session 11 of user core. Nov 12 20:50:54.485239 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:50:54.928807 sshd[5988]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:54.937472 systemd[1]: sshd@10-172.31.17.74:22-139.178.89.65:55630.service: Deactivated successfully. Nov 12 20:50:54.956965 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:50:54.973605 systemd-logind[1946]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:50:55.033290 systemd[1]: Started sshd@11-172.31.17.74:22-139.178.89.65:55638.service - OpenSSH per-connection server daemon (139.178.89.65:55638). Nov 12 20:50:55.035937 systemd-logind[1946]: Removed session 11. Nov 12 20:50:55.246090 sshd[6001]: Accepted publickey for core from 139.178.89.65 port 55638 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:50:55.249953 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:55.261418 systemd-logind[1946]: New session 12 of user core. Nov 12 20:50:55.268174 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:50:55.760860 sshd[6001]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:55.790086 systemd[1]: sshd@11-172.31.17.74:22-139.178.89.65:55638.service: Deactivated successfully. Nov 12 20:50:55.804670 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:50:55.812098 systemd-logind[1946]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:50:55.827246 systemd[1]: Started sshd@12-172.31.17.74:22-139.178.89.65:55642.service - OpenSSH per-connection server daemon (139.178.89.65:55642). Nov 12 20:50:55.836707 systemd-logind[1946]: Removed session 12. Nov 12 20:50:56.029450 sshd[6012]: Accepted publickey for core from 139.178.89.65 port 55642 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:50:56.035672 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:56.078112 systemd-logind[1946]: New session 13 of user core. Nov 12 20:50:56.102212 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:50:56.407533 sshd[6012]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:56.412754 systemd-logind[1946]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:50:56.414648 systemd[1]: sshd@12-172.31.17.74:22-139.178.89.65:55642.service: Deactivated successfully. Nov 12 20:50:56.417132 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:50:56.418471 systemd-logind[1946]: Removed session 13. Nov 12 20:51:01.475253 systemd[1]: Started sshd@13-172.31.17.74:22-139.178.89.65:51638.service - OpenSSH per-connection server daemon (139.178.89.65:51638). Nov 12 20:51:01.672570 sshd[6029]: Accepted publickey for core from 139.178.89.65 port 51638 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:01.678536 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:01.707171 systemd-logind[1946]: New session 14 of user core. Nov 12 20:51:01.732284 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 12 20:51:02.369603 sshd[6029]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:02.424009 systemd[1]: sshd@13-172.31.17.74:22-139.178.89.65:51638.service: Deactivated successfully. Nov 12 20:51:02.424129 systemd-logind[1946]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:51:02.431814 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:51:02.478227 systemd-logind[1946]: Removed session 14. Nov 12 20:51:02.603168 systemd[1]: run-containerd-runc-k8s.io-b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd-runc.MQQCOL.mount: Deactivated successfully. Nov 12 20:51:07.392582 systemd[1]: Started sshd@14-172.31.17.74:22-139.178.89.65:37950.service - OpenSSH per-connection server daemon (139.178.89.65:37950). Nov 12 20:51:07.632496 sshd[6087]: Accepted publickey for core from 139.178.89.65 port 37950 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:07.636877 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:07.646956 systemd-logind[1946]: New session 15 of user core. Nov 12 20:51:07.656333 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:51:08.620145 sshd[6087]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:08.651388 systemd[1]: sshd@14-172.31.17.74:22-139.178.89.65:37950.service: Deactivated successfully. Nov 12 20:51:08.656394 systemd-logind[1946]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:51:08.670029 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:51:08.690655 systemd-logind[1946]: Removed session 15. Nov 12 20:51:13.678095 systemd[1]: Started sshd@15-172.31.17.74:22-139.178.89.65:37952.service - OpenSSH per-connection server daemon (139.178.89.65:37952). Nov 12 20:51:13.918943 sshd[6102]: Accepted publickey for core from 139.178.89.65 port 37952 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:13.927196 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:13.937249 systemd-logind[1946]: New session 16 of user core. Nov 12 20:51:13.945375 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:51:14.386420 sshd[6102]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:14.413622 systemd[1]: sshd@15-172.31.17.74:22-139.178.89.65:37952.service: Deactivated successfully. Nov 12 20:51:14.428730 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:51:14.435987 systemd-logind[1946]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:51:14.439600 systemd-logind[1946]: Removed session 16. Nov 12 20:51:19.439157 systemd[1]: Started sshd@16-172.31.17.74:22-139.178.89.65:39850.service - OpenSSH per-connection server daemon (139.178.89.65:39850). Nov 12 20:51:19.647046 sshd[6117]: Accepted publickey for core from 139.178.89.65 port 39850 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:19.649187 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:19.663789 systemd-logind[1946]: New session 17 of user core. Nov 12 20:51:19.675533 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:51:20.212599 sshd[6117]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:20.223625 systemd[1]: sshd@16-172.31.17.74:22-139.178.89.65:39850.service: Deactivated successfully. Nov 12 20:51:20.228385 systemd[1]: session-17.scope: Deactivated successfully. 
Nov 12 20:51:20.238748 systemd-logind[1946]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:51:20.268237 systemd[1]: Started sshd@17-172.31.17.74:22-139.178.89.65:39854.service - OpenSSH per-connection server daemon (139.178.89.65:39854). Nov 12 20:51:20.274704 systemd-logind[1946]: Removed session 17. Nov 12 20:51:20.479137 sshd[6130]: Accepted publickey for core from 139.178.89.65 port 39854 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:20.482250 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:20.493752 systemd-logind[1946]: New session 18 of user core. Nov 12 20:51:20.501142 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:51:21.371385 sshd[6130]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:21.382308 systemd-logind[1946]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:51:21.385255 systemd[1]: sshd@17-172.31.17.74:22-139.178.89.65:39854.service: Deactivated successfully. Nov 12 20:51:21.396518 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:51:21.432941 systemd[1]: Started sshd@18-172.31.17.74:22-139.178.89.65:39856.service - OpenSSH per-connection server daemon (139.178.89.65:39856). Nov 12 20:51:21.435051 systemd-logind[1946]: Removed session 18. Nov 12 20:51:21.630431 sshd[6141]: Accepted publickey for core from 139.178.89.65 port 39856 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:21.632274 sshd[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:21.650863 systemd-logind[1946]: New session 19 of user core. Nov 12 20:51:21.658152 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:51:22.595747 systemd[1]: run-containerd-runc-k8s.io-b8fbefa6d64a5f750c98940005eed32270f26b4f1133f84315a6759e6668bcfd-runc.B03l6D.mount: Deactivated successfully. Nov 12 20:51:25.650663 sshd[6141]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:25.661538 systemd[1]: sshd@18-172.31.17.74:22-139.178.89.65:39856.service: Deactivated successfully. Nov 12 20:51:25.665282 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:51:25.666730 systemd-logind[1946]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:51:25.668685 systemd-logind[1946]: Removed session 19. Nov 12 20:51:25.694516 systemd[1]: Started sshd@19-172.31.17.74:22-139.178.89.65:39862.service - OpenSSH per-connection server daemon (139.178.89.65:39862). Nov 12 20:51:25.926300 sshd[6183]: Accepted publickey for core from 139.178.89.65 port 39862 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:25.929836 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:25.940812 systemd-logind[1946]: New session 20 of user core. Nov 12 20:51:25.951170 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:51:27.771247 sshd[6183]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:27.812456 systemd[1]: sshd@19-172.31.17.74:22-139.178.89.65:39862.service: Deactivated successfully. Nov 12 20:51:27.827302 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:51:27.829807 systemd-logind[1946]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:51:27.850961 systemd[1]: Started sshd@20-172.31.17.74:22-139.178.89.65:51334.service - OpenSSH per-connection server daemon (139.178.89.65:51334). 
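[Editor's sketch] From here the log is dominated by SSH housekeeping: each inbound connection gets a transient sshd@N-<local>:22-<peer>:<port>.service unit, PAM opens a session for user core, and systemd-logind tracks it as session-N.scope until logout. A small Go sketch, fitted only to the exact line shapes in this log, that pairs the open and close events:

    // sshsessions.go: illustrative parser for the sshd pam_unix lines above;
    // the regexes match this log's format and nothing more general.
    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        opened = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened for user (\w+)`)
        closed = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed for user (\w+)`)
    )

    func main() {
        lines := []string{
            `Nov 12 20:50:55.249953 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)`,
            `Nov 12 20:50:55.760860 sshd[6001]: pam_unix(sshd:session): session closed for user core`,
        }
        open := map[string]bool{} // sshd pid -> session currently open
        for _, l := range lines {
            if m := opened.FindStringSubmatch(l); m != nil {
                open[m[1]] = true
                fmt.Printf("pid %s: %s logged in\n", m[1], m[2])
            } else if m := closed.FindStringSubmatch(l); m != nil {
                delete(open, m[1])
                fmt.Printf("pid %s: %s logged out\n", m[1], m[2])
            }
        }
        fmt.Printf("%d session(s) still open\n", len(open))
    }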
Nov 12 20:51:27.861064 systemd-logind[1946]: Removed session 20. Nov 12 20:51:28.082762 sshd[6195]: Accepted publickey for core from 139.178.89.65 port 51334 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:28.084706 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:28.090952 systemd-logind[1946]: New session 21 of user core. Nov 12 20:51:28.096189 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:51:28.451083 sshd[6195]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:28.461221 systemd-logind[1946]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:51:28.462508 systemd[1]: sshd@20-172.31.17.74:22-139.178.89.65:51334.service: Deactivated successfully. Nov 12 20:51:28.467118 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:51:28.469511 systemd-logind[1946]: Removed session 21. Nov 12 20:51:33.320402 update_engine[1950]: I20241112 20:51:33.320323 1950 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 12 20:51:33.320402 update_engine[1950]: I20241112 20:51:33.320398 1950 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 12 20:51:33.329029 update_engine[1950]: I20241112 20:51:33.328977 1950 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 12 20:51:33.330591 update_engine[1950]: I20241112 20:51:33.330232 1950 omaha_request_params.cc:62] Current group set to stable Nov 12 20:51:33.330742 update_engine[1950]: I20241112 20:51:33.330719 1950 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 12 20:51:33.330819 update_engine[1950]: I20241112 20:51:33.330804 1950 update_attempter.cc:643] Scheduling an action processor start. 
Nov 12 20:51:33.331947 update_engine[1950]: I20241112 20:51:33.330963 1950 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 12 20:51:33.331947 update_engine[1950]: I20241112 20:51:33.331023 1950 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 12 20:51:33.331947 update_engine[1950]: I20241112 20:51:33.331104 1950 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 12 20:51:33.331947 update_engine[1950]: I20241112 20:51:33.331113 1950 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Nov 12 20:51:33.331947 update_engine[1950]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Nov 12 20:51:33.331947 update_engine[1950]: <os version="Chateau" platform="CoreOS" sp="4081.2.0_x86_64"></os> Nov 12 20:51:33.331947 update_engine[1950]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.2.0" track="stable" bootid="{5226ca08-3409-4914-bd07-22a3b3c1f57d}" oem="ami" oemversion="3.2.985.0-r1" alephversion="4081.2.0" machineid="ec2ad6d4c289e85ac0433912c3545cd7" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Nov 12 20:51:33.331947 update_engine[1950]: <ping active="1"></ping> Nov 12 20:51:33.331947 update_engine[1950]: <updatecheck></updatecheck> Nov 12 20:51:33.331947 update_engine[1950]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Nov 12 20:51:33.331947 update_engine[1950]: </app> Nov 12 20:51:33.331947 update_engine[1950]: </request> Nov 12 20:51:33.331947 update_engine[1950]: I20241112 20:51:33.331121 1950 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:51:33.393549 update_engine[1950]: I20241112 20:51:33.391134 1950 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:51:33.396169 update_engine[1950]: I20241112 20:51:33.395608 1950 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:51:33.397702 locksmithd[1996]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 12 20:51:33.440051 update_engine[1950]: E20241112 20:51:33.439651 1950 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:51:33.440398 update_engine[1950]: I20241112 20:51:33.440115 1950 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 12 20:51:33.488450 systemd[1]: Started sshd@21-172.31.17.74:22-139.178.89.65:51350.service - OpenSSH per-connection server daemon (139.178.89.65:51350). Nov 12 20:51:33.816005 sshd[6230]: Accepted publickey for core from 139.178.89.65 port 51350 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:33.820397 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:33.833477 systemd-logind[1946]: New session 22 of user core. Nov 12 20:51:33.839138 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:51:34.511371 sshd[6230]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:34.571380 systemd[1]: sshd@21-172.31.17.74:22-139.178.89.65:51350.service: Deactivated successfully. Nov 12 20:51:34.576590 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:51:34.599339 systemd-logind[1946]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:51:34.606615 systemd-logind[1946]: Removed session 22. 
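[Editor's sketch] The Omaha request that update_engine prints above is plain XML: a request element wrapping an os tag and an app tag with ping, updatecheck, and event children. The sketch below rebuilds just the attributes visible in this log with Go's encoding/xml; the real client is C++ (omaha_request_action.cc) and sends more fields, so treat this as a shape illustration, not the protocol definition.

    // omaha.go: marshals a trimmed Omaha v3.0 request like the one logged.
    package main

    import (
        "encoding/xml"
        "fmt"
    )

    type ping struct {
        Active string `xml:"active,attr"`
    }

    type updateCheck struct{}

    type osTag struct {
        Version  string `xml:"version,attr"`
        Platform string `xml:"platform,attr"`
    }

    type app struct {
        AppID       string       `xml:"appid,attr"`
        Version     string       `xml:"version,attr"`
        Track       string       `xml:"track,attr"`
        Ping        *ping        `xml:"ping"`        // nil pointers are omitted
        UpdateCheck *updateCheck `xml:"updatecheck"` // present only on checks
    }

    type request struct {
        XMLName  xml.Name `xml:"request"`
        Protocol string   `xml:"protocol,attr"`
        OS       osTag    `xml:"os"`
        App      app      `xml:"app"`
    }

    func main() {
        r := request{
            Protocol: "3.0",
            OS:       osTag{Version: "Chateau", Platform: "CoreOS"},
            App: app{
                AppID:       "{e96281a6-d1af-4bde-9a0a-97b76e56dc57}",
                Version:     "4081.2.0",
                Track:       "stable",
                Ping:        &ping{Active: "1"},
                UpdateCheck: &updateCheck{},
            },
        }
        out, err := xml.MarshalIndent(r, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(xml.Header + string(out))
    }

The error report posted later in this log is the same envelope with the updatecheck replaced by an event element carrying eventresult="0" and an errorcode.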
Nov 12 20:51:39.544320 systemd[1]: Started sshd@22-172.31.17.74:22-139.178.89.65:54048.service - OpenSSH per-connection server daemon (139.178.89.65:54048). Nov 12 20:51:39.765992 sshd[6246]: Accepted publickey for core from 139.178.89.65 port 54048 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:39.769287 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:39.776502 systemd-logind[1946]: New session 23 of user core. Nov 12 20:51:39.786515 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 20:51:40.532209 sshd[6246]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:40.566742 systemd[1]: sshd@22-172.31.17.74:22-139.178.89.65:54048.service: Deactivated successfully. Nov 12 20:51:40.601662 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:51:40.605456 systemd-logind[1946]: Session 23 logged out. Waiting for processes to exit. Nov 12 20:51:40.609494 systemd-logind[1946]: Removed session 23. Nov 12 20:51:43.224108 update_engine[1950]: I20241112 20:51:43.223939 1950 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:51:43.225086 update_engine[1950]: I20241112 20:51:43.224363 1950 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:51:43.225086 update_engine[1950]: I20241112 20:51:43.224632 1950 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:51:43.225928 update_engine[1950]: E20241112 20:51:43.225890 1950 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:51:43.226117 update_engine[1950]: I20241112 20:51:43.226089 1950 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 12 20:51:45.570369 systemd[1]: Started sshd@23-172.31.17.74:22-139.178.89.65:54054.service - OpenSSH per-connection server daemon (139.178.89.65:54054). Nov 12 20:51:45.834086 sshd[6267]: Accepted publickey for core from 139.178.89.65 port 54054 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:45.837276 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:45.845112 systemd-logind[1946]: New session 24 of user core. Nov 12 20:51:45.851121 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 20:51:46.124422 sshd[6267]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:46.135425 systemd[1]: sshd@23-172.31.17.74:22-139.178.89.65:54054.service: Deactivated successfully. Nov 12 20:51:46.139838 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 20:51:46.142466 systemd-logind[1946]: Session 24 logged out. Waiting for processes to exit. Nov 12 20:51:46.144461 systemd-logind[1946]: Removed session 24. Nov 12 20:51:51.165514 systemd[1]: Started sshd@24-172.31.17.74:22-139.178.89.65:43120.service - OpenSSH per-connection server daemon (139.178.89.65:43120). Nov 12 20:51:51.348026 sshd[6284]: Accepted publickey for core from 139.178.89.65 port 43120 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:51.349419 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:51.358539 systemd-logind[1946]: New session 25 of user core. Nov 12 20:51:51.364105 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 20:51:51.589157 sshd[6284]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:51.592097 systemd[1]: sshd@24-172.31.17.74:22-139.178.89.65:43120.service: Deactivated successfully. 
Nov 12 20:51:51.594482 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:51:51.595980 systemd-logind[1946]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:51:51.597440 systemd-logind[1946]: Removed session 25. Nov 12 20:51:53.222996 update_engine[1950]: I20241112 20:51:53.222918 1950 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:51:53.223436 update_engine[1950]: I20241112 20:51:53.223206 1950 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:51:53.223500 update_engine[1950]: I20241112 20:51:53.223474 1950 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:51:53.223986 update_engine[1950]: E20241112 20:51:53.223953 1950 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:51:53.224082 update_engine[1950]: I20241112 20:51:53.224018 1950 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 12 20:51:56.628254 systemd[1]: Started sshd@25-172.31.17.74:22-139.178.89.65:43128.service - OpenSSH per-connection server daemon (139.178.89.65:43128). Nov 12 20:51:56.813387 sshd[6321]: Accepted publickey for core from 139.178.89.65 port 43128 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:56.815377 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:56.822153 systemd-logind[1946]: New session 26 of user core. Nov 12 20:51:56.830616 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 20:51:57.111698 sshd[6321]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:57.122628 systemd-logind[1946]: Session 26 logged out. Waiting for processes to exit. Nov 12 20:51:57.125599 systemd[1]: sshd@25-172.31.17.74:22-139.178.89.65:43128.service: Deactivated successfully. Nov 12 20:51:57.133867 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:51:57.139104 systemd-logind[1946]: Removed session 26. Nov 12 20:52:02.172623 systemd[1]: Started sshd@26-172.31.17.74:22-139.178.89.65:58456.service - OpenSSH per-connection server daemon (139.178.89.65:58456). Nov 12 20:52:02.451724 sshd[6357]: Accepted publickey for core from 139.178.89.65 port 58456 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:02.465364 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:02.518117 systemd-logind[1946]: New session 27 of user core. Nov 12 20:52:02.535468 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 20:52:03.222948 update_engine[1950]: I20241112 20:52:03.222293 1950 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:52:03.222948 update_engine[1950]: I20241112 20:52:03.222558 1950 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:52:03.222948 update_engine[1950]: I20241112 20:52:03.222839 1950 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
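[Editor's sketch] The update_engine fetch attempts above follow a fixed cadence: each transfer gets a short timeout, the update server on this machine is configured as the literal string "disabled" so name resolution always fails, and the client logs retry 1 through retry 3 roughly ten seconds apart before giving up. A Go sketch of that loop; the constants and the fetch helper are illustrative stand-ins, not update_engine's actual configuration.

    // retry.go: the retry cadence seen in the libcurl_http_fetcher lines,
    // reduced to a sketch.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    const (
        maxRetries    = 3                // log shows "retry 1".."retry 3"
        retryInterval = 10 * time.Second // attempts land ~10s apart above
    )

    // fetch stands in for libcurl_http_fetcher; with the host set to
    // "disabled", DNS resolution fails on every attempt.
    func fetch(host string) error {
        return errors.New("Could not resolve host: " + host)
    }

    func main() {
        for attempt := 1; ; attempt++ {
            err := fetch("disabled")
            if err == nil {
                fmt.Println("update check succeeded")
                return
            }
            if attempt > maxRetries {
                // Fourth failure: corresponds to "Omaha request network
                // transfer failed", the error event, and backing off until
                // the next scheduled check.
                fmt.Printf("update failed after %d retries: %v\n", maxRetries, err)
                return
            }
            fmt.Printf("No HTTP response, retry %d\n", attempt)
            time.Sleep(retryInterval)
        }
    }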
Nov 12 20:52:03.223984 update_engine[1950]: E20241112 20:52:03.223613 1950 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223673 1950 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223687 1950 omaha_request_action.cc:617] Omaha request response: Nov 12 20:52:03.223984 update_engine[1950]: E20241112 20:52:03.223786 1950 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223816 1950 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223827 1950 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223835 1950 update_attempter.cc:306] Processing Done. Nov 12 20:52:03.223984 update_engine[1950]: E20241112 20:52:03.223855 1950 update_attempter.cc:619] Update failed. Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223863 1950 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223870 1950 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 12 20:52:03.223984 update_engine[1950]: I20241112 20:52:03.223896 1950 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 12 20:52:03.226949 update_engine[1950]: I20241112 20:52:03.224388 1950 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 12 20:52:03.226949 update_engine[1950]: I20241112 20:52:03.224433 1950 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 12 20:52:03.226949 update_engine[1950]: I20241112 20:52:03.224443 1950 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Nov 12 20:52:03.226949 update_engine[1950]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Nov 12 20:52:03.226949 update_engine[1950]: <os version="Chateau" platform="CoreOS" sp="4081.2.0_x86_64"></os> Nov 12 20:52:03.226949 update_engine[1950]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.2.0" track="stable" bootid="{5226ca08-3409-4914-bd07-22a3b3c1f57d}" oem="ami" oemversion="3.2.985.0-r1" alephversion="4081.2.0" machineid="ec2ad6d4c289e85ac0433912c3545cd7" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Nov 12 20:52:03.226949 update_engine[1950]: <event eventtype="3" eventresult="0" errorcode="268437456"></event> Nov 12 20:52:03.226949 update_engine[1950]: </app> Nov 12 20:52:03.226949 update_engine[1950]: </request> Nov 12 20:52:03.226949 update_engine[1950]: I20241112 20:52:03.224452 1950 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:52:03.226949 update_engine[1950]: I20241112 20:52:03.225302 1950 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:52:03.226949 update_engine[1950]: I20241112 20:52:03.225528 1950 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 12 20:52:03.227469 update_engine[1950]: E20241112 20:52:03.227056 1950 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:52:03.227469 update_engine[1950]: I20241112 20:52:03.227115 1950 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 12 20:52:03.227469 update_engine[1950]: I20241112 20:52:03.227125 1950 omaha_request_action.cc:617] Omaha request response: Nov 12 20:52:03.227469 update_engine[1950]: I20241112 20:52:03.227136 1950 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 20:52:03.227469 update_engine[1950]: I20241112 20:52:03.227144 1950 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 20:52:03.227469 update_engine[1950]: I20241112 20:52:03.227151 1950 update_attempter.cc:306] Processing Done. Nov 12 20:52:03.227469 update_engine[1950]: I20241112 20:52:03.227160 1950 update_attempter.cc:310] Error event sent. Nov 12 20:52:03.230951 locksmithd[1996]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 12 20:52:03.235064 update_engine[1950]: I20241112 20:52:03.234995 1950 update_check_scheduler.cc:74] Next update check in 44m11s Nov 12 20:52:03.238701 locksmithd[1996]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 12 20:52:03.246241 sshd[6357]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:03.261966 systemd[1]: sshd@26-172.31.17.74:22-139.178.89.65:58456.service: Deactivated successfully. Nov 12 20:52:03.276372 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 20:52:03.281753 systemd-logind[1946]: Session 27 logged out. Waiting for processes to exit. Nov 12 20:52:03.283197 systemd-logind[1946]: Removed session 27. Nov 12 20:52:08.282507 systemd[1]: Started sshd@27-172.31.17.74:22-139.178.89.65:59796.service - OpenSSH per-connection server daemon (139.178.89.65:59796). Nov 12 20:52:08.487726 sshd[6400]: Accepted publickey for core from 139.178.89.65 port 59796 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:08.488636 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:08.514776 systemd-logind[1946]: New session 28 of user core. Nov 12 20:52:08.516047 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 20:52:08.816138 sshd[6400]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:08.821450 systemd-logind[1946]: Session 28 logged out. Waiting for processes to exit. Nov 12 20:52:08.822744 systemd[1]: sshd@27-172.31.17.74:22-139.178.89.65:59796.service: Deactivated successfully. Nov 12 20:52:08.826569 systemd[1]: session-28.scope: Deactivated successfully. Nov 12 20:52:08.828193 systemd-logind[1946]: Removed session 28. Nov 12 20:52:22.260325 systemd[1]: cri-containerd-970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4.scope: Deactivated successfully. Nov 12 20:52:22.261219 systemd[1]: cri-containerd-970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4.scope: Consumed 3.885s CPU time, 34.5M memory peak, 0B memory swap peak. Nov 12 20:52:22.546932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4-rootfs.mount: Deactivated successfully. 
Nov 12 20:52:22.631435 containerd[1977]: time="2024-11-12T20:52:22.592956555Z" level=info msg="shim disconnected" id=970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4 namespace=k8s.io
Nov 12 20:52:22.652238 containerd[1977]: time="2024-11-12T20:52:22.652174605Z" level=warning msg="cleaning up after shim disconnected" id=970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4 namespace=k8s.io
Nov 12 20:52:22.652463 containerd[1977]: time="2024-11-12T20:52:22.652440925Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:52:22.910195 kubelet[3348]: I1112 20:52:22.910003 3348 scope.go:117] "RemoveContainer" containerID="970a8a58c86fa8438954786291dc5be2b61fdb8616df5b6cd825ba5203cd64d4"
Nov 12 20:52:22.948943 containerd[1977]: time="2024-11-12T20:52:22.948871704Z" level=info msg="CreateContainer within sandbox \"2b38d9223b0ee740da4794c14a718e7b36db7704d7f20bc4af2255d8241f3dba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 12 20:52:23.072200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029862705.mount: Deactivated successfully.
Nov 12 20:52:23.093426 containerd[1977]: time="2024-11-12T20:52:23.093376195Z" level=info msg="CreateContainer within sandbox \"2b38d9223b0ee740da4794c14a718e7b36db7704d7f20bc4af2255d8241f3dba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0d27e93449c27a887b1c0822ca7a31ef2b1a58f4a08b1d1f09238c8822ff69b3\""
Nov 12 20:52:23.094768 containerd[1977]: time="2024-11-12T20:52:23.094729582Z" level=info msg="StartContainer for \"0d27e93449c27a887b1c0822ca7a31ef2b1a58f4a08b1d1f09238c8822ff69b3\""
Nov 12 20:52:23.223117 systemd[1]: Started cri-containerd-0d27e93449c27a887b1c0822ca7a31ef2b1a58f4a08b1d1f09238c8822ff69b3.scope - libcontainer container 0d27e93449c27a887b1c0822ca7a31ef2b1a58f4a08b1d1f09238c8822ff69b3.
Nov 12 20:52:23.312075 containerd[1977]: time="2024-11-12T20:52:23.307234059Z" level=info msg="StartContainer for \"0d27e93449c27a887b1c0822ca7a31ef2b1a58f4a08b1d1f09238c8822ff69b3\" returns successfully"
Nov 12 20:52:23.914110 systemd[1]: cri-containerd-384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526.scope: Deactivated successfully.
Nov 12 20:52:23.914399 systemd[1]: cri-containerd-384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526.scope: Consumed 5.887s CPU time.
Nov 12 20:52:23.961273 containerd[1977]: time="2024-11-12T20:52:23.961191967Z" level=info msg="shim disconnected" id=384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526 namespace=k8s.io
Nov 12 20:52:23.962712 containerd[1977]: time="2024-11-12T20:52:23.961337456Z" level=warning msg="cleaning up after shim disconnected" id=384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526 namespace=k8s.io
Nov 12 20:52:23.962712 containerd[1977]: time="2024-11-12T20:52:23.961352019Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:52:23.962094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526-rootfs.mount: Deactivated successfully.
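The sequence above, a cri-containerd scope deactivating, containerd reporting "shim disconnected", then kubelet issuing "RemoveContainer" followed by "CreateContainer within sandbox ... Attempt:1" and a successful StartContainer, is the kubelet's normal crash-recovery path: the pod sandbox survives and only the failed container is recreated, with the attempt counter incremented. The tigera-operator container that exits at the end of this block goes through the same cycle next. A hedged way to inspect this from the node itself, assuming crictl is available there (the ID prefix is taken from the log above):

    # List all kube-controller-manager containers, including exited ones,
    # to see the old and the recreated instance side by side.
    crictl ps -a --name kube-controller-manager
    # Full metadata of the new container, including the incremented attempt.
    crictl inspect 0d27e93449c2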
Nov 12 20:52:24.891412 kubelet[3348]: I1112 20:52:24.891374 3348 scope.go:117] "RemoveContainer" containerID="384a92b65921f5f5d8e7327a121af1e9adbad23176885aacf5a41d4e62b1e526"
Nov 12 20:52:24.898314 containerd[1977]: time="2024-11-12T20:52:24.898268004Z" level=info msg="CreateContainer within sandbox \"f07c1e22479bd8bffcc982d0c718a10556d0c736de5d5be1980c636a15c3d36d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 12 20:52:24.916736 containerd[1977]: time="2024-11-12T20:52:24.916685731Z" level=info msg="CreateContainer within sandbox \"f07c1e22479bd8bffcc982d0c718a10556d0c736de5d5be1980c636a15c3d36d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"81ce9af0914e137d9301becef8cefde07c1bef38146bc9b6e8c3448315be1e2f\""
Nov 12 20:52:24.917304 containerd[1977]: time="2024-11-12T20:52:24.917274416Z" level=info msg="StartContainer for \"81ce9af0914e137d9301becef8cefde07c1bef38146bc9b6e8c3448315be1e2f\""
Nov 12 20:52:24.965200 systemd[1]: Started cri-containerd-81ce9af0914e137d9301becef8cefde07c1bef38146bc9b6e8c3448315be1e2f.scope - libcontainer container 81ce9af0914e137d9301becef8cefde07c1bef38146bc9b6e8c3448315be1e2f.
Nov 12 20:52:24.999407 containerd[1977]: time="2024-11-12T20:52:24.999369634Z" level=info msg="StartContainer for \"81ce9af0914e137d9301becef8cefde07c1bef38146bc9b6e8c3448315be1e2f\" returns successfully"
Nov 12 20:52:27.139837 systemd[1]: cri-containerd-14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7.scope: Deactivated successfully.
Nov 12 20:52:27.140821 systemd[1]: cri-containerd-14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7.scope: Consumed 1.838s CPU time, 18.0M memory peak, 0B memory swap peak.
Nov 12 20:52:27.173557 containerd[1977]: time="2024-11-12T20:52:27.173446119Z" level=info msg="shim disconnected" id=14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7 namespace=k8s.io
Nov 12 20:52:27.173557 containerd[1977]: time="2024-11-12T20:52:27.173537422Z" level=warning msg="cleaning up after shim disconnected" id=14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7 namespace=k8s.io
Nov 12 20:52:27.173557 containerd[1977]: time="2024-11-12T20:52:27.173550888Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:52:27.174869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7-rootfs.mount: Deactivated successfully.
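The "Consumed 1.838s CPU time, 18.0M memory peak" style lines come from systemd's cgroup accounting: each CRI container runs in a transient cri-containerd-<id>.scope unit, and systemd logs the accumulated usage when the scope is deactivated. If one wanted to watch the same accounting while containers are still running, a possible approach (the unit name below is a placeholder; real units carry the full 64-character container ID seen in the log) is:

    # Live per-cgroup CPU and memory usage, including the
    # cri-containerd-*.scope units appearing in this log.
    systemd-cgtop
    # Accounting for a single scope; <id> is a placeholder, not a real unit.
    systemctl status 'cri-containerd-<id>.scope'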
Nov 12 20:52:27.594789 kubelet[3348]: E1112 20:52:27.589187 3348 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 12 20:52:27.902411 kubelet[3348]: I1112 20:52:27.902297 3348 scope.go:117] "RemoveContainer" containerID="14aae1e8ef9f0bb91e4a8224aef744c161275fd54745884c407eab877068b7a7"
Nov 12 20:52:27.909185 containerd[1977]: time="2024-11-12T20:52:27.909140084Z" level=info msg="CreateContainer within sandbox \"89e51a937096a5504cd55cdcd2684ed15e37c91314d0a669be9edf71ec15fb26\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 12 20:52:27.948914 containerd[1977]: time="2024-11-12T20:52:27.948089368Z" level=info msg="CreateContainer within sandbox \"89e51a937096a5504cd55cdcd2684ed15e37c91314d0a669be9edf71ec15fb26\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9135e5df89fd731ad91370b092d5df4e40cf586c91a3005cbe6a5444bdc0b903\""
Nov 12 20:52:27.963279 containerd[1977]: time="2024-11-12T20:52:27.958388067Z" level=info msg="StartContainer for \"9135e5df89fd731ad91370b092d5df4e40cf586c91a3005cbe6a5444bdc0b903\""
Nov 12 20:52:27.974487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872023783.mount: Deactivated successfully.
Nov 12 20:52:28.032154 systemd[1]: Started cri-containerd-9135e5df89fd731ad91370b092d5df4e40cf586c91a3005cbe6a5444bdc0b903.scope - libcontainer container 9135e5df89fd731ad91370b092d5df4e40cf586c91a3005cbe6a5444bdc0b903.
Nov 12 20:52:28.095548 containerd[1977]: time="2024-11-12T20:52:28.095497470Z" level=info msg="StartContainer for \"9135e5df89fd731ad91370b092d5df4e40cf586c91a3005cbe6a5444bdc0b903\" returns successfully"
Nov 12 20:52:37.596125 kubelet[3348]: E1112 20:52:37.595796 3348 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
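The two "Failed to update lease" errors bracket the window in which kube-scheduler and kube-controller-manager were being restarted: the kubelet renews its Lease object in the kube-node-lease namespace roughly every ten seconds, and a Put that times out against the local apiserver endpoint (172.31.17.74:6443) is consistent with the control plane being briefly overloaded rather than with a network fault. A hedged follow-up check from any machine with cluster credentials, using the node name taken from the URL above:

    # renewTime should be within the last few seconds once the
    # control plane has settled.
    kubectl -n kube-node-lease get lease ip-172-31-17-74 -o yaml
    # The node should report Ready again if lease renewal resumed.
    kubectl get node ip-172-31-17-74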