Oct 8 19:36:01.229447 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Oct 8 19:36:01.229494 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 18:25:39 -00 2024
Oct 8 19:36:01.229519 kernel: KASLR disabled due to lack of seed
Oct 8 19:36:01.229536 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:36:01.229551 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Oct 8 19:36:01.229567 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:36:01.229584 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Oct 8 19:36:01.229600 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Oct 8 19:36:01.229616 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 8 19:36:01.229631 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Oct 8 19:36:01.229651 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 8 19:36:01.229683 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Oct 8 19:36:01.229704 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Oct 8 19:36:01.229720 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Oct 8 19:36:01.229739 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 8 19:36:01.229761 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Oct 8 19:36:01.229778 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Oct 8 19:36:01.229794 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Oct 8 19:36:01.229811 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Oct 8 19:36:01.229827 kernel: printk: bootconsole [uart0] enabled
Oct 8 19:36:01.229843 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:36:01.229860 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:36:01.229876 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Oct 8 19:36:01.229892 kernel: Zone ranges:
Oct 8 19:36:01.229908 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 19:36:01.229924 kernel: DMA32 empty
Oct 8 19:36:01.229944 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Oct 8 19:36:01.229961 kernel: Movable zone start for each node
Oct 8 19:36:01.229977 kernel: Early memory node ranges
Oct 8 19:36:01.229993 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Oct 8 19:36:01.230009 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Oct 8 19:36:01.230050 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Oct 8 19:36:01.230072 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Oct 8 19:36:01.230089 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Oct 8 19:36:01.230105 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Oct 8 19:36:01.230122 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Oct 8 19:36:01.230138 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Oct 8 19:36:01.230154 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:36:01.230176 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Oct 8 19:36:01.230193 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:36:01.230216 kernel: psci: PSCIv1.0 detected in firmware.
Oct 8 19:36:01.230234 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:36:01.230251 kernel: psci: Trusted OS migration not required
Oct 8 19:36:01.230272 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:36:01.230290 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:36:01.230307 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:36:01.230324 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 19:36:01.230341 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:36:01.230359 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:36:01.230376 kernel: CPU features: detected: Spectre-v2
Oct 8 19:36:01.230393 kernel: CPU features: detected: Spectre-v3a
Oct 8 19:36:01.230410 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:36:01.230427 kernel: CPU features: detected: ARM erratum 1742098
Oct 8 19:36:01.230444 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Oct 8 19:36:01.230465 kernel: alternatives: applying boot alternatives
Oct 8 19:36:01.230485 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:36:01.230503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:36:01.230521 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:36:01.230538 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:36:01.230556 kernel: Fallback order for Node 0: 0
Oct 8 19:36:01.230573 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Oct 8 19:36:01.230590 kernel: Policy zone: Normal
Oct 8 19:36:01.230607 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:36:01.230625 kernel: software IO TLB: area num 2.
Oct 8 19:36:01.230642 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Oct 8 19:36:01.230665 kernel: Memory: 3820152K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39360K init, 897K bss, 210312K reserved, 0K cma-reserved)
Oct 8 19:36:01.230682 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:36:01.230699 kernel: trace event string verifier disabled
Oct 8 19:36:01.230717 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:36:01.230735 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:36:01.230753 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:36:01.230770 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:36:01.230788 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:36:01.230806 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:36:01.230823 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:36:01.230840 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:36:01.230862 kernel: GICv3: 96 SPIs implemented
Oct 8 19:36:01.230879 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:36:01.230896 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:36:01.230913 kernel: GICv3: GICv3 features: 16 PPIs
Oct 8 19:36:01.230930 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Oct 8 19:36:01.230947 kernel: ITS [mem 0x10080000-0x1009ffff]
Oct 8 19:36:01.230965 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:36:01.230983 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:36:01.231001 kernel: GICv3: using LPI property table @0x00000004000e0000
Oct 8 19:36:01.231018 kernel: ITS: Using hypervisor restricted LPI range [128]
Oct 8 19:36:01.231064 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Oct 8 19:36:01.231084 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:36:01.231108 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Oct 8 19:36:01.231125 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Oct 8 19:36:01.231143 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Oct 8 19:36:01.231160 kernel: Console: colour dummy device 80x25
Oct 8 19:36:01.231178 kernel: printk: console [tty1] enabled
Oct 8 19:36:01.231195 kernel: ACPI: Core revision 20230628
Oct 8 19:36:01.231213 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Oct 8 19:36:01.231231 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:36:01.231249 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:36:01.231266 kernel: landlock: Up and running.
Oct 8 19:36:01.231287 kernel: SELinux: Initializing.
Oct 8 19:36:01.231305 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:36:01.231322 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:36:01.231340 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:36:01.231358 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:36:01.231375 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:36:01.231393 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:36:01.231410 kernel: Platform MSI: ITS@0x10080000 domain created
Oct 8 19:36:01.231428 kernel: PCI/MSI: ITS@0x10080000 domain created
Oct 8 19:36:01.231449 kernel: Remapping and enabling EFI services.
Oct 8 19:36:01.231467 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:36:01.231484 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:36:01.231501 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Oct 8 19:36:01.231519 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Oct 8 19:36:01.231536 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Oct 8 19:36:01.231554 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:36:01.231571 kernel: SMP: Total of 2 processors activated.
Oct 8 19:36:01.231589 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:36:01.231610 kernel: CPU features: detected: 32-bit EL1 Support
Oct 8 19:36:01.231628 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:36:01.231656 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:36:01.231678 kernel: alternatives: applying system-wide alternatives
Oct 8 19:36:01.231696 kernel: devtmpfs: initialized
Oct 8 19:36:01.231715 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:36:01.231733 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:36:01.231751 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:36:01.231769 kernel: SMBIOS 3.0.0 present.
Oct 8 19:36:01.231792 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Oct 8 19:36:01.231810 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:36:01.231828 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:36:01.231847 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:36:01.231865 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:36:01.231883 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:36:01.231902 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Oct 8 19:36:01.231924 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:36:01.231943 kernel: cpuidle: using governor menu
Oct 8 19:36:01.231961 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:36:01.231979 kernel: ASID allocator initialised with 65536 entries
Oct 8 19:36:01.231997 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:36:01.232016 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:36:01.232054 kernel: Modules: 17504 pages in range for non-PLT usage
Oct 8 19:36:01.232075 kernel: Modules: 509024 pages in range for PLT usage
Oct 8 19:36:01.232094 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:36:01.232119 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:36:01.232138 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:36:01.232156 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:36:01.232174 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:36:01.232192 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:36:01.232210 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:36:01.232229 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:36:01.232247 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:36:01.232265 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:36:01.232288 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:36:01.232306 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:36:01.232324 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:36:01.232343 kernel: ACPI: Interpreter enabled
Oct 8 19:36:01.232361 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:36:01.232379 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:36:01.232397 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Oct 8 19:36:01.232727 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:36:01.232949 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:36:01.233200 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:36:01.233426 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Oct 8 19:36:01.233647 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Oct 8 19:36:01.233693 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Oct 8 19:36:01.233714 kernel: acpiphp: Slot [1] registered
Oct 8 19:36:01.233734 kernel: acpiphp: Slot [2] registered
Oct 8 19:36:01.233752 kernel: acpiphp: Slot [3] registered
Oct 8 19:36:01.233770 kernel: acpiphp: Slot [4] registered
Oct 8 19:36:01.233799 kernel: acpiphp: Slot [5] registered
Oct 8 19:36:01.233817 kernel: acpiphp: Slot [6] registered
Oct 8 19:36:01.233835 kernel: acpiphp: Slot [7] registered
Oct 8 19:36:01.233853 kernel: acpiphp: Slot [8] registered
Oct 8 19:36:01.233871 kernel: acpiphp: Slot [9] registered
Oct 8 19:36:01.233889 kernel: acpiphp: Slot [10] registered
Oct 8 19:36:01.233908 kernel: acpiphp: Slot [11] registered
Oct 8 19:36:01.233926 kernel: acpiphp: Slot [12] registered
Oct 8 19:36:01.233944 kernel: acpiphp: Slot [13] registered
Oct 8 19:36:01.233966 kernel: acpiphp: Slot [14] registered
Oct 8 19:36:01.233985 kernel: acpiphp: Slot [15] registered
Oct 8 19:36:01.234003 kernel: acpiphp: Slot [16] registered
Oct 8 19:36:01.234021 kernel: acpiphp: Slot [17] registered
Oct 8 19:36:01.234065 kernel: acpiphp: Slot [18] registered
Oct 8 19:36:01.234084 kernel: acpiphp: Slot [19] registered
Oct 8 19:36:01.234102 kernel: acpiphp: Slot [20] registered
Oct 8 19:36:01.234121 kernel: acpiphp: Slot [21] registered
Oct 8 19:36:01.234139 kernel: acpiphp: Slot [22] registered
Oct 8 19:36:01.234157 kernel: acpiphp: Slot [23] registered
Oct 8 19:36:01.234181 kernel: acpiphp: Slot [24] registered
Oct 8 19:36:01.234199 kernel: acpiphp: Slot [25] registered
Oct 8 19:36:01.234217 kernel: acpiphp: Slot [26] registered
Oct 8 19:36:01.234235 kernel: acpiphp: Slot [27] registered
Oct 8 19:36:01.234253 kernel: acpiphp: Slot [28] registered
Oct 8 19:36:01.234271 kernel: acpiphp: Slot [29] registered
Oct 8 19:36:01.234289 kernel: acpiphp: Slot [30] registered
Oct 8 19:36:01.234307 kernel: acpiphp: Slot [31] registered
Oct 8 19:36:01.234325 kernel: PCI host bridge to bus 0000:00
Oct 8 19:36:01.234546 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Oct 8 19:36:01.234737 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:36:01.234924 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:36:01.237583 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Oct 8 19:36:01.237888 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Oct 8 19:36:01.238154 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Oct 8 19:36:01.238377 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Oct 8 19:36:01.238609 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct 8 19:36:01.238821 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Oct 8 19:36:01.239092 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:36:01.239325 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct 8 19:36:01.239537 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Oct 8 19:36:01.244297 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Oct 8 19:36:01.244555 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Oct 8 19:36:01.244762 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:36:01.244968 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Oct 8 19:36:01.248435 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Oct 8 19:36:01.248692 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Oct 8 19:36:01.248903 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Oct 8 19:36:01.249169 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Oct 8 19:36:01.249385 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Oct 8 19:36:01.249573 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:36:01.249788 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:36:01.249815 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:36:01.249836 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:36:01.249855 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:36:01.249874 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:36:01.249893 kernel: iommu: Default domain type: Translated
Oct 8 19:36:01.249919 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:36:01.249937 kernel: efivars: Registered efivars operations
Oct 8 19:36:01.249956 kernel: vgaarb: loaded
Oct 8 19:36:01.249974 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:36:01.249992 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:36:01.250012 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:36:01.252264 kernel: pnp: PnP ACPI init
Oct 8 19:36:01.252532 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Oct 8 19:36:01.252570 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:36:01.252589 kernel: NET: Registered PF_INET protocol family
Oct 8 19:36:01.252608 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:36:01.252627 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:36:01.252646 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:36:01.252665 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:36:01.252683 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:36:01.252702 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:36:01.252720 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:36:01.252743 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:36:01.252762 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:36:01.252780 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:36:01.252799 kernel: kvm [1]: HYP mode not available
Oct 8 19:36:01.252817 kernel: Initialise system trusted keyrings
Oct 8 19:36:01.252836 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:36:01.252854 kernel: Key type asymmetric registered
Oct 8 19:36:01.252873 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:36:01.252891 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:36:01.252914 kernel: io scheduler mq-deadline registered
Oct 8 19:36:01.252933 kernel: io scheduler kyber registered
Oct 8 19:36:01.252952 kernel: io scheduler bfq registered
Oct 8 19:36:01.253207 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Oct 8 19:36:01.253239 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:36:01.253258 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:36:01.253277 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Oct 8 19:36:01.253296 kernel: ACPI: button: Sleep Button [SLPB]
Oct 8 19:36:01.253315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:36:01.253343 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Oct 8 19:36:01.253560 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Oct 8 19:36:01.253590 kernel: printk: console [ttyS0] disabled
Oct 8 19:36:01.253610 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Oct 8 19:36:01.253629 kernel: printk: console [ttyS0] enabled
Oct 8 19:36:01.253649 kernel: printk: bootconsole [uart0] disabled
Oct 8 19:36:01.253685 kernel: thunder_xcv, ver 1.0
Oct 8 19:36:01.253709 kernel: thunder_bgx, ver 1.0
Oct 8 19:36:01.253728 kernel: nicpf, ver 1.0
Oct 8 19:36:01.253754 kernel: nicvf, ver 1.0
Oct 8 19:36:01.254004 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:36:01.258353 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:36:00 UTC (1728416160)
Oct 8 19:36:01.258416 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:36:01.258438 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Oct 8 19:36:01.258457 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:36:01.258477 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:36:01.258495 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:36:01.258527 kernel: Segment Routing with IPv6
Oct 8 19:36:01.260564 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:36:01.260584 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:36:01.260603 kernel: Key type dns_resolver registered
Oct 8 19:36:01.260621 kernel: registered taskstats version 1
Oct 8 19:36:01.260640 kernel: Loading compiled-in X.509 certificates
Oct 8 19:36:01.260659 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e9e638352c282bfddf5aec6da700ad8191939d05'
Oct 8 19:36:01.260677 kernel: Key type .fscrypt registered
Oct 8 19:36:01.260695 kernel: Key type fscrypt-provisioning registered
Oct 8 19:36:01.260721 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:36:01.260740 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:36:01.260758 kernel: ima: No architecture policies found
Oct 8 19:36:01.260776 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:36:01.260794 kernel: clk: Disabling unused clocks
Oct 8 19:36:01.260813 kernel: Freeing unused kernel memory: 39360K
Oct 8 19:36:01.260831 kernel: Run /init as init process
Oct 8 19:36:01.260849 kernel: with arguments:
Oct 8 19:36:01.260867 kernel: /init
Oct 8 19:36:01.260889 kernel: with environment:
Oct 8 19:36:01.260907 kernel: HOME=/
Oct 8 19:36:01.260925 kernel: TERM=linux
Oct 8 19:36:01.260943 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:36:01.260966 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:36:01.260990 systemd[1]: Detected virtualization amazon.
Oct 8 19:36:01.261010 systemd[1]: Detected architecture arm64.
Oct 8 19:36:01.261050 systemd[1]: Running in initrd.
Oct 8 19:36:01.261079 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:36:01.261099 systemd[1]: Hostname set to .
Oct 8 19:36:01.261121 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:36:01.261140 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:36:01.261160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:36:01.261181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:36:01.261202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:36:01.261223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:36:01.261248 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:36:01.261269 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:36:01.261292 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:36:01.261313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:36:01.261334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:36:01.261354 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:36:01.261378 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:36:01.261399 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:36:01.261419 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:36:01.261439 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:36:01.261459 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:36:01.261479 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:36:01.261499 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:36:01.261520 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:36:01.261540 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:36:01.261565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:36:01.261585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:36:01.261605 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:36:01.261626 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:36:01.261646 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:36:01.261680 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:36:01.261705 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:36:01.261726 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:36:01.261746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:36:01.261772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:36:01.261793 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:36:01.261813 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:36:01.261833 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:36:01.261855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:36:01.261923 systemd-journald[251]: Collecting audit messages is disabled.
Oct 8 19:36:01.261967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:36:01.261988 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:36:01.262012 kernel: Bridge firewalling registered
Oct 8 19:36:01.262050 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:36:01.262074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:36:01.262096 systemd-journald[251]: Journal started
Oct 8 19:36:01.262137 systemd-journald[251]: Runtime Journal (/run/log/journal/ec287f61cba07683e44d8c9cded94468) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:36:01.205348 systemd-modules-load[252]: Inserted module 'overlay'
Oct 8 19:36:01.250168 systemd-modules-load[252]: Inserted module 'br_netfilter'
Oct 8 19:36:01.269730 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:36:01.271414 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:36:01.293311 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:36:01.309321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:36:01.327420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:36:01.337451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:36:01.353520 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:36:01.359585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:36:01.372615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:36:01.388084 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:36:01.403293 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:36:01.412803 dracut-cmdline[282]: dracut-dracut-053
Oct 8 19:36:01.421640 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:36:01.487319 systemd-resolved[291]: Positive Trust Anchors:
Oct 8 19:36:01.487356 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:36:01.487422 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:36:01.585066 kernel: SCSI subsystem initialized
Oct 8 19:36:01.592075 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:36:01.605898 kernel: iscsi: registered transport (tcp)
Oct 8 19:36:01.627153 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:36:01.627225 kernel: QLogic iSCSI HBA Driver
Oct 8 19:36:01.730069 kernel: random: crng init done
Oct 8 19:36:01.728363 systemd-resolved[291]: Defaulting to hostname 'linux'.
Oct 8 19:36:01.732542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:36:01.736832 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:36:01.754977 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:36:01.765434 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:36:01.802544 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:36:01.802633 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:36:01.804234 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:36:01.886049 kernel: raid6: neonx8 gen() 6662 MB/s
Oct 8 19:36:01.888073 kernel: raid6: neonx4 gen() 6471 MB/s
Oct 8 19:36:01.904064 kernel: raid6: neonx2 gen() 5384 MB/s
Oct 8 19:36:01.921060 kernel: raid6: neonx1 gen() 3952 MB/s
Oct 8 19:36:01.938065 kernel: raid6: int64x8 gen() 3805 MB/s
Oct 8 19:36:01.955059 kernel: raid6: int64x4 gen() 3692 MB/s
Oct 8 19:36:01.972069 kernel: raid6: int64x2 gen() 3565 MB/s
Oct 8 19:36:01.989810 kernel: raid6: int64x1 gen() 2768 MB/s
Oct 8 19:36:01.989845 kernel: raid6: using algorithm neonx8 gen() 6662 MB/s
Oct 8 19:36:02.007794 kernel: raid6: .... xor() 4882 MB/s, rmw enabled
Oct 8 19:36:02.007855 kernel: raid6: using neon recovery algorithm
Oct 8 19:36:02.016365 kernel: xor: measuring software checksum speed
Oct 8 19:36:02.016443 kernel: 8regs : 10971 MB/sec
Oct 8 19:36:02.017453 kernel: 32regs : 11952 MB/sec
Oct 8 19:36:02.018618 kernel: arm64_neon : 9324 MB/sec
Oct 8 19:36:02.018650 kernel: xor: using function: 32regs (11952 MB/sec)
Oct 8 19:36:02.104081 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:36:02.126247 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:36:02.136361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:36:02.182751 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Oct 8 19:36:02.191890 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:36:02.213482 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:36:02.243771 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Oct 8 19:36:02.305608 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:36:02.315356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:36:02.434086 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:36:02.445320 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:36:02.490243 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:36:02.497016 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:36:02.499518 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:36:02.504390 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:36:02.527204 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:36:02.583989 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:36:02.623678 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 19:36:02.623756 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Oct 8 19:36:02.635916 kernel: ena 0000:00:05.0: ENA device version: 0.10
Oct 8 19:36:02.636367 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Oct 8 19:36:02.647752 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:16:ed:38:f4:d7
Oct 8 19:36:02.650676 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:36:02.667845 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:36:02.668246 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:36:02.677985 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:36:02.680289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:36:02.680597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:36:02.682892 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:36:02.709078 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Oct 8 19:36:02.709141 kernel: nvme nvme0: pci function 0000:00:04.0
Oct 8 19:36:02.710831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:36:02.720080 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Oct 8 19:36:02.731400 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:36:02.731468 kernel: GPT:9289727 != 16777215
Oct 8 19:36:02.732612 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:36:02.733352 kernel: GPT:9289727 != 16777215
Oct 8 19:36:02.734415 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:36:02.735284 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:36:02.749317 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:36:02.762323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:36:02.806783 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:36:02.869153 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (540)
Oct 8 19:36:02.887859 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Oct 8 19:36:02.905956 kernel: BTRFS: device fsid ad786f33-c7c5-429e-95f9-4ea457bd3916 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Oct 8 19:36:02.971221 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Oct 8 19:36:03.009368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:36:03.024075 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Oct 8 19:36:03.026556 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Oct 8 19:36:03.043441 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:36:03.055143 disk-uuid[660]: Primary Header is updated.
Oct 8 19:36:03.055143 disk-uuid[660]: Secondary Entries is updated.
Oct 8 19:36:03.055143 disk-uuid[660]: Secondary Header is updated.
Oct 8 19:36:03.064141 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:36:03.075068 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:36:03.083071 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:36:04.084115 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:36:04.084578 disk-uuid[661]: The operation has completed successfully.
Oct 8 19:36:04.270153 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:36:04.272470 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:36:04.324084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:36:04.333272 sh[1003]: Success
Oct 8 19:36:04.362114 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:36:04.461441 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:36:04.485290 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:36:04.490129 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:36:04.527652 kernel: BTRFS info (device dm-0): first mount of filesystem ad786f33-c7c5-429e-95f9-4ea457bd3916
Oct 8 19:36:04.527727 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:36:04.527758 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:36:04.530623 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:36:04.530671 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:36:04.627053 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 8 19:36:04.649485 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:36:04.653375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:36:04.671652 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:36:04.678475 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:36:04.703114 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:36:04.703232 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:36:04.703261 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:36:04.710109 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:36:04.732736 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:36:04.735114 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:36:04.758952 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:36:04.771458 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:36:04.870140 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:36:04.885497 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:36:04.942723 systemd-networkd[1195]: lo: Link UP
Oct 8 19:36:04.942745 systemd-networkd[1195]: lo: Gained carrier
Oct 8 19:36:04.947694 systemd-networkd[1195]: Enumeration completed
Oct 8 19:36:04.948442 systemd-networkd[1195]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:36:04.948449 systemd-networkd[1195]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:36:04.950061 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:36:04.957183 systemd[1]: Reached target network.target - Network.
Oct 8 19:36:04.962231 systemd-networkd[1195]: eth0: Link UP
Oct 8 19:36:04.962239 systemd-networkd[1195]: eth0: Gained carrier
Oct 8 19:36:04.962261 systemd-networkd[1195]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:36:04.983125 systemd-networkd[1195]: eth0: DHCPv4 address 172.31.17.52/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:36:05.168089 ignition[1116]: Ignition 2.19.0
Oct 8 19:36:05.168121 ignition[1116]: Stage: fetch-offline
Oct 8 19:36:05.169737 ignition[1116]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:36:05.169770 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:36:05.171445 ignition[1116]: Ignition finished successfully
Oct 8 19:36:05.178330 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:36:05.196465 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 19:36:05.220901 ignition[1205]: Ignition 2.19.0
Oct 8 19:36:05.220931 ignition[1205]: Stage: fetch
Oct 8 19:36:05.222614 ignition[1205]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:36:05.222641 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:36:05.223696 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:36:05.233921 ignition[1205]: PUT result: OK
Oct 8 19:36:05.237205 ignition[1205]: parsed url from cmdline: ""
Oct 8 19:36:05.237231 ignition[1205]: no config URL provided
Oct 8 19:36:05.237247 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:36:05.237273 ignition[1205]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:36:05.237321 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:36:05.240817 ignition[1205]: PUT result: OK
Oct 8 19:36:05.240933 ignition[1205]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Oct 8 19:36:05.247973 ignition[1205]: GET result: OK
Oct 8 19:36:05.248165 ignition[1205]: parsing config with SHA512: 8e85c6075037ef410a1508f5fd82bffcb1400820fc928f93f4404988c19451caf42aac57ac4d87236a298e8a58e253115969b81fd9280e35bf177ac77bd1160e
Oct 8 19:36:05.257975 unknown[1205]: fetched base config from "system"
Oct 8 19:36:05.257997 unknown[1205]: fetched base config from "system"
Oct 8 19:36:05.258858 ignition[1205]: fetch: fetch complete
Oct 8 19:36:05.258010 unknown[1205]: fetched user config from "aws"
Oct 8 19:36:05.258870 ignition[1205]: fetch: fetch passed
Oct 8 19:36:05.258954 ignition[1205]: Ignition finished successfully
Oct 8 19:36:05.270069 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 19:36:05.283332 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:36:05.308179 ignition[1212]: Ignition 2.19.0
Oct 8 19:36:05.308208 ignition[1212]: Stage: kargs
Oct 8 19:36:05.309169 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:36:05.309196 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:36:05.309362 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:36:05.310474 ignition[1212]: PUT result: OK
Oct 8 19:36:05.322412 ignition[1212]: kargs: kargs passed
Oct 8 19:36:05.322523 ignition[1212]: Ignition finished successfully
Oct 8 19:36:05.327152 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:36:05.336394 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:36:05.368498 ignition[1218]: Ignition 2.19.0
Oct 8 19:36:05.368527 ignition[1218]: Stage: disks
Oct 8 19:36:05.370234 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:36:05.370259 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:36:05.371165 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:36:05.373310 ignition[1218]: PUT result: OK
Oct 8 19:36:05.388784 ignition[1218]: disks: disks passed
Oct 8 19:36:05.388928 ignition[1218]: Ignition finished successfully
Oct 8 19:36:05.393666 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:36:05.398786 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:36:05.402870 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:36:05.407268 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:36:05.410834 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:36:05.414423 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:36:05.433795 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:36:05.481502 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:36:05.489761 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:36:05.501426 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:36:05.596594 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 833c86f3-93dd-4526-bb43-c7809dac8e51 r/w with ordered data mode. Quota mode: none.
Oct 8 19:36:05.597737 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:36:05.599179 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:36:05.624206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:36:05.630229 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:36:05.634306 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:36:05.634405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:36:05.634456 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:36:05.658060 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1246)
Oct 8 19:36:05.662230 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:36:05.662297 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:36:05.662335 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:36:05.661758 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:36:05.675629 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:36:05.673740 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:36:05.689562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:36:06.075302 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:36:06.083383 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:36:06.092478 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:36:06.101888 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:36:06.382197 systemd-networkd[1195]: eth0: Gained IPv6LL
Oct 8 19:36:06.408733 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:36:06.419385 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:36:06.433409 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:36:06.453387 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:36:06.460158 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:36:06.477473 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:36:06.507902 ignition[1362]: INFO : Ignition 2.19.0
Oct 8 19:36:06.507902 ignition[1362]: INFO : Stage: mount
Oct 8 19:36:06.511213 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:36:06.511213 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:36:06.511213 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:36:06.517784 ignition[1362]: INFO : PUT result: OK
Oct 8 19:36:06.524867 ignition[1362]: INFO : mount: mount passed
Oct 8 19:36:06.526575 ignition[1362]: INFO : Ignition finished successfully
Oct 8 19:36:06.531134 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:36:06.546403 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:36:06.609111 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:36:06.625241 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1371)
Oct 8 19:36:06.628931 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:36:06.628976 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:36:06.630155 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:36:06.635072 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:36:06.638122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:36:06.676550 ignition[1388]: INFO : Ignition 2.19.0
Oct 8 19:36:06.676550 ignition[1388]: INFO : Stage: files
Oct 8 19:36:06.679934 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:36:06.679934 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:36:06.679934 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:36:06.686582 ignition[1388]: INFO : PUT result: OK
Oct 8 19:36:06.691738 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:36:06.695080 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:36:06.695080 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:36:06.713760 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:36:06.716352 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:36:06.718780 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:36:06.717934 unknown[1388]: wrote ssh authorized keys file for user: core
Oct 8 19:36:06.730838 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:36:06.735648 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:36:06.735648 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:36:06.735648 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:36:06.847611 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 19:36:06.995104 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:36:06.995104 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:36:06.995104 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:36:06.995104 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:36:06.995104 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:36:06.995104 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:36:07.014966 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:36:07.014966 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:36:07.014966 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:36:07.014966 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:36:07.014966 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:36:07.014966 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:36:07.036357 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:36:07.036357 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:36:07.036357 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Oct 8 19:36:07.358264 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 8 19:36:07.712119 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:36:07.712119 ignition[1388]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:36:07.721098 ignition[1388]: INFO : files: files passed
Oct 8 19:36:07.721098 ignition[1388]: INFO : Ignition finished successfully
Oct 8 19:36:07.730200 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:36:07.768400 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:36:07.781399 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:36:07.795463 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:36:07.796140 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:36:07.815561 initrd-setup-root-after-ignition[1417]: grep:
Oct 8 19:36:07.815561 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:36:07.820670 initrd-setup-root-after-ignition[1417]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:36:07.820670 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:36:07.828961 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:36:07.831795 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:36:07.847416 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:36:07.899524 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:36:07.899727 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:36:07.903389 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:36:07.907777 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:36:07.909983 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:36:07.925406 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:36:07.954124 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:36:07.964270 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:36:07.991321 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:36:07.995699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:36:07.997323 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:36:07.998131 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:36:07.998370 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:36:07.999308 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:36:07.999613 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:36:07.999913 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:36:08.000505 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:36:08.000807 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:36:08.001130 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:36:08.001403 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:36:08.001733 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:36:08.002020 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:36:08.002302 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:36:08.002539 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:36:08.002747 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:36:08.003466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:36:08.003802 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:36:08.004021 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:36:08.024353 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:36:08.024595 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:36:08.110552 ignition[1441]: INFO : Ignition 2.19.0
Oct 8 19:36:08.110552 ignition[1441]: INFO : Stage: umount
Oct 8 19:36:08.110552 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:36:08.110552 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:36:08.110552 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:36:08.110552 ignition[1441]: INFO : PUT result: OK
Oct 8 19:36:08.024821 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:36:08.141702 ignition[1441]: INFO : umount: umount passed
Oct 8 19:36:08.141702 ignition[1441]: INFO : Ignition finished successfully
Oct 8 19:36:08.025449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:36:08.025684 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:36:08.037953 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:36:08.038189 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:36:08.064815 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:36:08.090562 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:36:08.090879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:36:08.115295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:36:08.123148 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:36:08.123424 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:36:08.136794 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:36:08.138250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:36:08.159279 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:36:08.161308 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:36:08.189292 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:36:08.192670 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:36:08.197730 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:36:08.203776 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:36:08.203888 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:36:08.206160 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 19:36:08.206244 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 19:36:08.208406 systemd[1]: Stopped target network.target - Network.
Oct 8 19:36:08.210126 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:36:08.210212 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:36:08.213578 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:36:08.217727 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:36:08.228262 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:36:08.237209 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:36:08.239191 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:36:08.241060 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:36:08.241147 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:36:08.243083 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:36:08.243152 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:36:08.245160 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:36:08.245245 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:36:08.247166 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:36:08.247242 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:36:08.251355 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:36:08.265206 systemd-networkd[1195]: eth0: DHCPv6 lease lost
Oct 8 19:36:08.267141 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:36:08.274639 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:36:08.275042 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:36:08.282272 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:36:08.282485 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:36:08.300902 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:36:08.303467 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:36:08.309542 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:36:08.309854 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:36:08.320945 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:36:08.321857 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:36:08.329229 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:36:08.329342 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:36:08.350457 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:36:08.352804 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:36:08.353004 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:36:08.363213 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:36:08.363314 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:36:08.365386 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:36:08.365476 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:36:08.367984 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:36:08.368124 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:36:08.376334 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:36:08.406479 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:36:08.406706 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:36:08.413889 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:36:08.415865 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:36:08.419633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:36:08.419718 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:36:08.423369 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:36:08.423438 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:36:08.426554 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:36:08.426651 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:36:08.428968 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:36:08.429088 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:36:08.432963 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:36:08.433105 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:36:08.455366 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:36:08.463336 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:36:08.463461 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:36:08.465882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:36:08.465964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:36:08.490871 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:36:08.491863 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:36:08.498317 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:36:08.510280 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:36:08.563334 systemd[1]: Switching root.
Oct 8 19:36:08.588020 systemd-journald[251]: Journal stopped
Oct 8 19:36:11.599962 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:36:11.600137 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:36:11.600188 kernel: SELinux: policy capability open_perms=1
Oct 8 19:36:11.600222 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:36:11.600255 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:36:11.600294 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:36:11.600325 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:36:11.600359 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:36:11.600392 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:36:11.600425 kernel: audit: type=1403 audit(1728416169.918:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:36:11.600468 systemd[1]: Successfully loaded SELinux policy in 68.873ms.
Oct 8 19:36:11.600533 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.682ms.
Oct 8 19:36:11.600569 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:36:11.600603 systemd[1]: Detected virtualization amazon.
Oct 8 19:36:11.600642 systemd[1]: Detected architecture arm64.
Oct 8 19:36:11.600674 systemd[1]: Detected first boot.
Oct 8 19:36:11.600707 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:36:11.600745 zram_generator::config[1501]: No configuration found.
Oct 8 19:36:11.600779 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:36:11.600815 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:36:11.600851 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Oct 8 19:36:11.600885 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:36:11.600924 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:36:11.600957 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:36:11.600989 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:36:11.601019 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:36:11.626167 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:36:11.626216 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:36:11.626251 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:36:11.626282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:36:11.626323 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:36:11.626357 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:36:11.626396 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:36:11.626428 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:36:11.626460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:36:11.626490 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:36:11.626523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:36:11.626557 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:36:11.626587 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:36:11.626630 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:36:11.626661 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:36:11.626697 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:36:11.626733 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:36:11.626765 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:36:11.626796 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:36:11.626827 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:36:11.626857 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:36:11.626888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:36:11.626928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:36:11.626957 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:36:11.626990 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:36:11.627428 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:36:11.631661 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:36:11.631697 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:36:11.631728 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:36:11.631759 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:36:11.631800 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:36:11.631833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:36:11.631866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:36:11.631896 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:36:11.631928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:36:11.631958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:36:11.631988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:36:11.632020 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:36:11.632527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:36:11.635262 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:36:11.635356 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 8 19:36:11.635396 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 8 19:36:11.635426 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:36:11.635457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:36:11.635489 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:36:11.635524 kernel: fuse: init (API version 7.39)
Oct 8 19:36:11.635555 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:36:11.635597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:36:11.635631 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:36:11.635660 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:36:11.635694 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:36:11.635723 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:36:11.635753 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:36:11.635782 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:36:11.635811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:36:11.635840 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:36:11.635930 systemd-journald[1597]: Collecting audit messages is disabled.
Oct 8 19:36:11.635997 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:36:11.636050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:36:11.636084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:36:11.636115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:36:11.636145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:36:11.636181 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:36:11.636211 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:36:11.636240 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:36:11.636269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:36:11.636301 kernel: loop: module loaded
Oct 8 19:36:11.636330 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:36:11.636359 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:36:11.636388 systemd-journald[1597]: Journal started
Oct 8 19:36:11.636442 systemd-journald[1597]: Runtime Journal (/run/log/journal/ec287f61cba07683e44d8c9cded94468) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:36:11.665087 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:36:11.678299 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:36:11.686086 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:36:11.701055 kernel: ACPI: bus type drm_connector registered
Oct 8 19:36:11.701163 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:36:11.713062 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:36:11.729072 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:36:11.751112 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:36:11.773209 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:36:11.787067 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:36:11.791040 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:36:11.794978 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:36:11.797266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:36:11.801659 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:36:11.802327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:36:11.805458 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:36:11.807946 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:36:11.817934 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:36:11.858593 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:36:11.869415 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:36:11.874303 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:36:11.915194 systemd-journald[1597]: Time spent on flushing to /var/log/journal/ec287f61cba07683e44d8c9cded94468 is 88.396ms for 898 entries.
Oct 8 19:36:11.915194 systemd-journald[1597]: System Journal (/var/log/journal/ec287f61cba07683e44d8c9cded94468) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:36:12.024981 systemd-journald[1597]: Received client request to flush runtime journal.
Oct 8 19:36:11.911610 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:36:11.913649 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Oct 8 19:36:11.913675 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Oct 8 19:36:11.924547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:36:11.944671 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:36:11.982890 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:36:12.001439 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:36:12.025148 udevadm[1670]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:36:12.031939 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:36:12.090337 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:36:12.103761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:36:12.149725 systemd-tmpfiles[1676]: ACLs are not supported, ignoring.
Oct 8 19:36:12.150387 systemd-tmpfiles[1676]: ACLs are not supported, ignoring.
Oct 8 19:36:12.160486 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:36:12.902181 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:36:12.915469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:36:12.972767 systemd-udevd[1682]: Using default interface naming scheme 'v255'.
Oct 8 19:36:13.058209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:36:13.072396 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:36:13.116363 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:36:13.213016 (udev-worker)[1699]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:36:13.219994 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Oct 8 19:36:13.264193 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1691)
Oct 8 19:36:13.283362 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1691)
Oct 8 19:36:13.281180 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:36:13.460468 systemd-networkd[1685]: lo: Link UP
Oct 8 19:36:13.460957 systemd-networkd[1685]: lo: Gained carrier
Oct 8 19:36:13.465280 systemd-networkd[1685]: Enumeration completed
Oct 8 19:36:13.466986 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:36:13.467004 systemd-networkd[1685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:36:13.467242 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:36:13.479349 systemd-networkd[1685]: eth0: Link UP
Oct 8 19:36:13.479731 systemd-networkd[1685]: eth0: Gained carrier
Oct 8 19:36:13.479765 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:36:13.482443 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:36:13.493939 systemd-networkd[1685]: eth0: DHCPv4 address 172.31.17.52/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:36:13.529138 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1705)
Oct 8 19:36:13.553861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:36:13.755960 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:36:13.772981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:36:13.782458 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:36:13.786889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:36:13.834063 lvm[1809]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:36:13.872755 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:36:13.875551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:36:13.888661 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:36:13.910491 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:36:13.950299 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:36:13.953732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:36:13.957137 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:36:13.957472 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:36:13.959640 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:36:13.963926 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:36:13.984528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:36:13.992323 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:36:13.995630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:36:14.012471 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:36:14.019366 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:36:14.030495 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:36:14.034822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:36:14.066665 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:36:14.088088 kernel: loop0: detected capacity change from 0 to 114432
Oct 8 19:36:14.108312 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:36:14.109690 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:36:14.183202 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:36:14.206085 kernel: loop1: detected capacity change from 0 to 194512
Oct 8 19:36:14.261088 kernel: loop2: detected capacity change from 0 to 52536
Oct 8 19:36:14.375102 kernel: loop3: detected capacity change from 0 to 114328
Oct 8 19:36:14.478121 kernel: loop4: detected capacity change from 0 to 114432
Oct 8 19:36:14.491226 kernel: loop5: detected capacity change from 0 to 194512
Oct 8 19:36:14.510103 kernel: loop6: detected capacity change from 0 to 52536
Oct 8 19:36:14.527072 kernel: loop7: detected capacity change from 0 to 114328
Oct 8 19:36:14.539331 (sd-merge)[1840]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Oct 8 19:36:14.540941 (sd-merge)[1840]: Merged extensions into '/usr'.
Oct 8 19:36:14.549080 systemd[1]: Reloading requested from client PID 1826 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:36:14.549284 systemd[1]: Reloading...
Oct 8 19:36:14.702094 zram_generator::config[1868]: No configuration found.
Oct 8 19:36:14.973939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:36:15.117188 systemd[1]: Reloading finished in 566 ms.
Oct 8 19:36:15.150096 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:36:15.167449 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:36:15.177460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:36:15.203835 systemd[1]: Reloading requested from client PID 1925 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:36:15.203865 systemd[1]: Reloading...
Oct 8 19:36:15.235906 ldconfig[1822]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:36:15.245634 systemd-tmpfiles[1926]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:36:15.246875 systemd-tmpfiles[1926]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:36:15.248784 systemd-tmpfiles[1926]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:36:15.249439 systemd-tmpfiles[1926]: ACLs are not supported, ignoring.
Oct 8 19:36:15.249601 systemd-tmpfiles[1926]: ACLs are not supported, ignoring.
Oct 8 19:36:15.258114 systemd-tmpfiles[1926]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:36:15.258136 systemd-tmpfiles[1926]: Skipping /boot
Oct 8 19:36:15.279935 systemd-tmpfiles[1926]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:36:15.280192 systemd-tmpfiles[1926]: Skipping /boot
Oct 8 19:36:15.376061 zram_generator::config[1961]: No configuration found.
Oct 8 19:36:15.470186 systemd-networkd[1685]: eth0: Gained IPv6LL
Oct 8 19:36:15.618852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:36:15.760823 systemd[1]: Reloading finished in 556 ms.
Oct 8 19:36:15.788257 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:36:15.791770 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:36:15.803334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:36:15.821463 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:36:15.828328 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:36:15.842515 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:36:15.854640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:36:15.873317 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:36:15.892642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:36:15.900182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:36:15.916712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:36:15.941002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:36:15.944880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:36:15.956894 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:36:15.970329 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:36:15.970733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:36:15.994426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:36:15.994819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:36:16.001017 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:36:16.005859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:36:16.006281 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:36:16.037844 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:36:16.047874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:36:16.054758 augenrules[2055]: No rules
Oct 8 19:36:16.058290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:36:16.074263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:36:16.081778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:36:16.106722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:36:16.109711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:36:16.109892 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:36:16.124210 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:36:16.131386 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:36:16.136365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:36:16.136987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:36:16.142606 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:36:16.143244 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:36:16.161716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:36:16.162474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:36:16.167909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:36:16.184188 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:36:16.186583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:36:16.190178 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:36:16.199915 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:36:16.205996 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:36:16.215498 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:36:16.260499 systemd-resolved[2024]: Positive Trust Anchors:
Oct 8 19:36:16.260529 systemd-resolved[2024]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:36:16.260593 systemd-resolved[2024]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:36:16.269276 systemd-resolved[2024]: Defaulting to hostname 'linux'.
Oct 8 19:36:16.272521 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:36:16.274920 systemd[1]: Reached target network.target - Network.
Oct 8 19:36:16.276800 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:36:16.278799 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:36:16.281018 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:36:16.283305 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:36:16.285631 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:36:16.288235 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:36:16.290416 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:36:16.292776 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:36:16.295156 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:36:16.295221 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:36:16.296966 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:36:16.299945 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 19:36:16.304913 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:36:16.309126 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 19:36:16.314175 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:36:16.316553 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:36:16.318665 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:36:16.320936 systemd[1]: System is tainted: cgroupsv1 Oct 8 19:36:16.321236 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:36:16.321292 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:36:16.332278 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:36:16.339159 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 8 19:36:16.347418 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:36:16.360247 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:36:16.369416 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:36:16.372394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:36:16.383216 jq[2088]: false Oct 8 19:36:16.384461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:36:16.403649 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 19:36:16.419492 systemd[1]: Started ntpd.service - Network Time Service. 
Oct 8 19:36:16.438293 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:36:16.449605 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:36:16.477537 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 8 19:36:16.495638 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:36:16.511255 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:36:16.523696 dbus-daemon[2087]: [system] SELinux support is enabled Oct 8 19:36:16.530856 extend-filesystems[2089]: Found loop4 Oct 8 19:36:16.530856 extend-filesystems[2089]: Found loop5 Oct 8 19:36:16.530856 extend-filesystems[2089]: Found loop6 Oct 8 19:36:16.530856 extend-filesystems[2089]: Found loop7 Oct 8 19:36:16.530856 extend-filesystems[2089]: Found nvme0n1 Oct 8 19:36:16.530856 extend-filesystems[2089]: Found nvme0n1p1 Oct 8 19:36:16.530856 extend-filesystems[2089]: Found nvme0n1p2 Oct 8 19:36:16.530224 dbus-daemon[2087]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1685 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 8 19:36:16.556384 extend-filesystems[2089]: Found nvme0n1p3 Oct 8 19:36:16.556384 extend-filesystems[2089]: Found usr Oct 8 19:36:16.556384 extend-filesystems[2089]: Found nvme0n1p4 Oct 8 19:36:16.556384 extend-filesystems[2089]: Found nvme0n1p6 Oct 8 19:36:16.556384 extend-filesystems[2089]: Found nvme0n1p7 Oct 8 19:36:16.556384 extend-filesystems[2089]: Found nvme0n1p9 Oct 8 19:36:16.556384 extend-filesystems[2089]: Checking size of /dev/nvme0n1p9 Oct 8 19:36:16.577950 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:36:16.585292 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Oct 8 19:36:16.599321 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:36:16.606903 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:36:16.611019 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:50:55 UTC 2024 (1): Starting Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: ---------------------------------------------------- Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: ntp-4 is maintained by Network Time Foundation, Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: corporation. Support and training for ntp-4 are Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: available at https://www.nwtime.org/support Oct 8 19:36:16.632992 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: ---------------------------------------------------- Oct 8 19:36:16.623431 ntpd[2094]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:50:55 UTC 2024 (1): Starting Oct 8 19:36:16.636068 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:36:16.623484 ntpd[2094]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 8 19:36:16.636578 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:36:16.623504 ntpd[2094]: ---------------------------------------------------- Oct 8 19:36:16.623526 ntpd[2094]: ntp-4 is maintained by Network Time Foundation, Oct 8 19:36:16.623546 ntpd[2094]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 8 19:36:16.623564 ntpd[2094]: corporation. 
Support and training for ntp-4 are Oct 8 19:36:16.623593 ntpd[2094]: available at https://www.nwtime.org/support Oct 8 19:36:16.623613 ntpd[2094]: ---------------------------------------------------- Oct 8 19:36:16.654408 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:36:16.658213 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: proto: precision = 0.096 usec (-23) Oct 8 19:36:16.655770 ntpd[2094]: proto: precision = 0.096 usec (-23) Oct 8 19:36:16.654955 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:36:16.664569 ntpd[2094]: basedate set to 2024-09-26 Oct 8 19:36:16.667544 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: basedate set to 2024-09-26 Oct 8 19:36:16.667544 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: gps base set to 2024-09-29 (week 2334) Oct 8 19:36:16.664620 ntpd[2094]: gps base set to 2024-09-29 (week 2334) Oct 8 19:36:16.674507 ntpd[2094]: Listen and drop on 0 v6wildcard [::]:123 Oct 8 19:36:16.678111 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Listen and drop on 0 v6wildcard [::]:123 Oct 8 19:36:16.678111 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 8 19:36:16.678111 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Listen normally on 2 lo 127.0.0.1:123 Oct 8 19:36:16.678111 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Listen normally on 3 eth0 172.31.17.52:123 Oct 8 19:36:16.678111 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Listen normally on 4 lo [::1]:123 Oct 8 19:36:16.678111 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Listen normally on 5 eth0 [fe80::416:edff:fe38:f4d7%2]:123 Oct 8 19:36:16.678111 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: Listening on routing socket on fd #22 for interface updates Oct 8 19:36:16.674605 ntpd[2094]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 8 19:36:16.674863 ntpd[2094]: Listen normally on 2 lo 127.0.0.1:123 Oct 8 19:36:16.674925 ntpd[2094]: Listen normally on 3 eth0 172.31.17.52:123 Oct 8 19:36:16.674991 ntpd[2094]: Listen normally on 4 lo [::1]:123 Oct 8 19:36:16.677295 
ntpd[2094]: Listen normally on 5 eth0 [fe80::416:edff:fe38:f4d7%2]:123 Oct 8 19:36:16.677388 ntpd[2094]: Listening on routing socket on fd #22 for interface updates Oct 8 19:36:16.693211 ntpd[2094]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:36:16.697218 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:36:16.697218 ntpd[2094]: 8 Oct 19:36:16 ntpd[2094]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:36:16.693275 ntpd[2094]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:36:16.703217 jq[2117]: true Oct 8 19:36:16.739513 extend-filesystems[2089]: Resized partition /dev/nvme0n1p9 Oct 8 19:36:16.743841 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:36:16.747535 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:36:16.762076 extend-filesystems[2137]: resize2fs 1.47.1 (20-May-2024) Oct 8 19:36:16.791177 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 8 19:36:16.808116 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Oct 8 19:36:16.824715 update_engine[2116]: I20241008 19:36:16.810503 2116 main.cc:92] Flatcar Update Engine starting Oct 8 19:36:16.842273 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 8 19:36:16.884266 update_engine[2116]: I20241008 19:36:16.847088 2116 update_check_scheduler.cc:74] Next update check in 9m33s Oct 8 19:36:16.884358 coreos-metadata[2085]: Oct 08 19:36:16.860 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 8 19:36:16.884358 coreos-metadata[2085]: Oct 08 19:36:16.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Oct 8 19:36:16.884358 coreos-metadata[2085]: Oct 08 19:36:16.873 INFO Fetch successful Oct 8 19:36:16.884358 coreos-metadata[2085]: Oct 08 19:36:16.873 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Oct 8 19:36:16.884358 coreos-metadata[2085]: Oct 08 19:36:16.880 INFO Fetch successful Oct 8 19:36:16.884358 coreos-metadata[2085]: Oct 08 19:36:16.880 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Oct 8 19:36:16.857180 (ntainerd)[2143]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:36:16.915745 extend-filesystems[2137]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 8 19:36:16.915745 extend-filesystems[2137]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:36:16.915745 extend-filesystems[2137]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 8 19:36:16.888842 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.885 INFO Fetch successful Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.885 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.897 INFO Fetch successful Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.897 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.913 INFO Fetch failed with 404: resource not found Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.923 INFO Fetch successful Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.923 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.936 INFO Fetch successful Oct 8 19:36:16.942987 coreos-metadata[2085]: Oct 08 19:36:16.936 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Oct 8 19:36:16.952433 tar[2128]: linux-arm64/helm Oct 8 19:36:16.969218 extend-filesystems[2089]: Resized filesystem in /dev/nvme0n1p9 Oct 8 19:36:16.928409 dbus-daemon[2087]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 8 19:36:16.888918 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 8 19:36:16.972612 coreos-metadata[2085]: Oct 08 19:36:16.948 INFO Fetch successful Oct 8 19:36:16.972612 coreos-metadata[2085]: Oct 08 19:36:16.948 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Oct 8 19:36:16.972612 coreos-metadata[2085]: Oct 08 19:36:16.955 INFO Fetch successful Oct 8 19:36:16.972612 coreos-metadata[2085]: Oct 08 19:36:16.955 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Oct 8 19:36:16.972612 coreos-metadata[2085]: Oct 08 19:36:16.961 INFO Fetch successful Oct 8 19:36:16.972931 jq[2138]: true Oct 8 19:36:16.893381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:36:16.893423 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:36:16.909193 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:36:16.909817 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:36:16.928878 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:36:17.030438 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 8 19:36:17.041988 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:36:17.047801 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:36:17.145321 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 8 19:36:17.177526 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Oct 8 19:36:17.241556 systemd-logind[2113]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:36:17.241627 systemd-logind[2113]: Watching system buttons on /dev/input/event1 (Sleep Button) Oct 8 19:36:17.245755 systemd-logind[2113]: New seat seat0. 
Oct 8 19:36:17.247380 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:36:17.293690 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 19:36:17.296927 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:36:17.386527 bash[2207]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:36:17.391363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:36:17.421059 systemd[1]: Starting sshkeys.service... Oct 8 19:36:17.465723 amazon-ssm-agent[2178]: Initializing new seelog logger Oct 8 19:36:17.465723 amazon-ssm-agent[2178]: New Seelog Logger Creation Complete Oct 8 19:36:17.465723 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.465723 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.465723 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 processing appconfig overrides Oct 8 19:36:17.472396 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.472396 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.472396 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 processing appconfig overrides Oct 8 19:36:17.480555 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.480555 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.480555 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 processing appconfig overrides Oct 8 19:36:17.480555 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO Proxy environment variables: Oct 8 19:36:17.487911 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Oct 8 19:36:17.499764 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 19:36:17.509555 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.509555 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:36:17.509555 amazon-ssm-agent[2178]: 2024/10/08 19:36:17 processing appconfig overrides Oct 8 19:36:17.580956 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO https_proxy: Oct 8 19:36:17.681414 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO http_proxy: Oct 8 19:36:17.704063 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2200) Oct 8 19:36:17.757193 coreos-metadata[2221]: Oct 08 19:36:17.755 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 8 19:36:17.763835 coreos-metadata[2221]: Oct 08 19:36:17.762 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Oct 8 19:36:17.770573 coreos-metadata[2221]: Oct 08 19:36:17.770 INFO Fetch successful Oct 8 19:36:17.770835 coreos-metadata[2221]: Oct 08 19:36:17.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 8 19:36:17.777103 coreos-metadata[2221]: Oct 08 19:36:17.775 INFO Fetch successful Oct 8 19:36:17.785092 unknown[2221]: wrote ssh authorized keys file for user: core Oct 8 19:36:17.795497 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO no_proxy: Oct 8 19:36:17.874063 locksmithd[2170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:36:17.888994 containerd[2143]: time="2024-10-08T19:36:17.888871670Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 19:36:17.898874 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO Checking if agent identity type OnPrem can be assumed Oct 8 19:36:17.929328 update-ssh-keys[2261]: 
Updated "/home/core/.ssh/authorized_keys" Oct 8 19:36:17.934489 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 19:36:17.960405 systemd[1]: Finished sshkeys.service. Oct 8 19:36:17.997230 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO Checking if agent identity type EC2 can be assumed Oct 8 19:36:18.065251 containerd[2143]: time="2024-10-08T19:36:18.064818647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:36:18.075476 containerd[2143]: time="2024-10-08T19:36:18.074913899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:36:18.075476 containerd[2143]: time="2024-10-08T19:36:18.075126119Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:36:18.075476 containerd[2143]: time="2024-10-08T19:36:18.075178631Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:36:18.076094 containerd[2143]: time="2024-10-08T19:36:18.076059479Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:36:18.076277 containerd[2143]: time="2024-10-08T19:36:18.076247651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:36:18.076512 containerd[2143]: time="2024-10-08T19:36:18.076479335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:36:18.076726 containerd[2143]: time="2024-10-08T19:36:18.076696895Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:36:18.078970 containerd[2143]: time="2024-10-08T19:36:18.078827195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:36:18.078970 containerd[2143]: time="2024-10-08T19:36:18.078906371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:36:18.079352 containerd[2143]: time="2024-10-08T19:36:18.078942755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:36:18.079352 containerd[2143]: time="2024-10-08T19:36:18.079215131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:36:18.081044 containerd[2143]: time="2024-10-08T19:36:18.080904347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:36:18.088478 containerd[2143]: time="2024-10-08T19:36:18.085073879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:36:18.088478 containerd[2143]: time="2024-10-08T19:36:18.087371627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:36:18.088478 containerd[2143]: time="2024-10-08T19:36:18.087414083Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:36:18.088478 containerd[2143]: time="2024-10-08T19:36:18.088282799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:36:18.088478 containerd[2143]: time="2024-10-08T19:36:18.088390883Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:36:18.095467 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO Agent will take identity from EC2 Oct 8 19:36:18.096520 containerd[2143]: time="2024-10-08T19:36:18.096423215Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:36:18.096668 containerd[2143]: time="2024-10-08T19:36:18.096639335Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:36:18.097304 containerd[2143]: time="2024-10-08T19:36:18.097270103Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:36:18.097757 containerd[2143]: time="2024-10-08T19:36:18.097726091Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:36:18.097879 containerd[2143]: time="2024-10-08T19:36:18.097850843Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:36:18.099504 containerd[2143]: time="2024-10-08T19:36:18.099322691Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.103526243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104654747Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104704367Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104740235Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104776811Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104813243Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104854415Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104893283Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104936279Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.104970311Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.105001823Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.105072959Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.105122003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106228 containerd[2143]: time="2024-10-08T19:36:18.105174167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105230315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105266927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105300503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105339647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105396419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105436895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105473435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105515939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105551687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105603851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.106863 containerd[2143]: time="2024-10-08T19:36:18.105636503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.108434339Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.112883831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.112944827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.112986563Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113140475Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113193671Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113234867Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113268311Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113302871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113359787Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113396639Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:36:18.116469 containerd[2143]: time="2024-10-08T19:36:18.113424575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 19:36:18.117135 containerd[2143]: time="2024-10-08T19:36:18.113990435Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:36:18.117135 containerd[2143]: time="2024-10-08T19:36:18.114134507Z" level=info msg="Connect containerd service" Oct 8 19:36:18.117135 containerd[2143]: time="2024-10-08T19:36:18.114233435Z" level=info msg="using legacy CRI server" Oct 8 19:36:18.117135 containerd[2143]: time="2024-10-08T19:36:18.114252683Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:36:18.117135 containerd[2143]: time="2024-10-08T19:36:18.114422087Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.123951347Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125525183Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125652959Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125739491Z" level=info msg="Start subscribing containerd event" Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125799539Z" level=info msg="Start recovering state" Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125917991Z" level=info msg="Start event monitor" Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125941019Z" level=info msg="Start snapshots syncer" Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125961371Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:36:18.126967 containerd[2143]: time="2024-10-08T19:36:18.125981087Z" level=info msg="Start streaming server" Oct 8 19:36:18.138151 containerd[2143]: time="2024-10-08T19:36:18.135705899Z" level=info msg="containerd successfully booted in 0.249822s" Oct 8 19:36:18.136385 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 8 19:36:18.197050 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 8 19:36:18.247684 dbus-daemon[2087]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 8 19:36:18.250158 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 8 19:36:18.258742 dbus-daemon[2087]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2169 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 8 19:36:18.276681 systemd[1]: Starting polkit.service - Authorization Manager... Oct 8 19:36:18.302159 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 8 19:36:18.384162 polkitd[2310]: Started polkitd version 121 Oct 8 19:36:18.402050 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 8 19:36:18.423453 polkitd[2310]: Loading rules from directory /etc/polkit-1/rules.d Oct 8 19:36:18.423601 polkitd[2310]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 8 19:36:18.433119 polkitd[2310]: Finished loading, compiling and executing 2 rules Oct 8 19:36:18.437994 dbus-daemon[2087]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 8 19:36:18.439288 systemd[1]: Started polkit.service - Authorization Manager. Oct 8 19:36:18.443673 polkitd[2310]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 8 19:36:18.502598 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Oct 8 19:36:18.527810 systemd-hostnamed[2169]: Hostname set to (transient) Oct 8 19:36:18.529105 systemd-resolved[2024]: System hostname changed to 'ip-172-31-17-52'. 
Oct 8 19:36:18.604057 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Oct 8 19:36:18.700265 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [amazon-ssm-agent] Starting Core Agent Oct 8 19:36:18.800177 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration Oct 8 19:36:18.902381 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [Registrar] Starting registrar module Oct 8 19:36:19.006336 amazon-ssm-agent[2178]: 2024-10-08 19:36:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Oct 8 19:36:19.194947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:36:19.214741 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:36:19.249458 tar[2128]: linux-arm64/LICENSE Oct 8 19:36:19.249458 tar[2128]: linux-arm64/README.md Oct 8 19:36:19.292735 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:36:19.684151 amazon-ssm-agent[2178]: 2024-10-08 19:36:19 INFO [EC2Identity] EC2 registration was successful. Oct 8 19:36:19.719448 amazon-ssm-agent[2178]: 2024-10-08 19:36:19 INFO [CredentialRefresher] credentialRefresher has started Oct 8 19:36:19.719448 amazon-ssm-agent[2178]: 2024-10-08 19:36:19 INFO [CredentialRefresher] Starting credentials refresher loop Oct 8 19:36:19.719448 amazon-ssm-agent[2178]: 2024-10-08 19:36:19 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 8 19:36:19.725496 sshd_keygen[2151]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:36:19.781735 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Oct 8 19:36:19.784195 amazon-ssm-agent[2178]: 2024-10-08 19:36:19 INFO [CredentialRefresher] Next credential rotation will be in 32.19165431296667 minutes Oct 8 19:36:19.795506 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:36:19.841795 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:36:19.843077 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:36:19.856569 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:36:19.891238 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:36:19.902831 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:36:19.909337 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 19:36:19.914590 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:36:19.918567 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:36:19.921895 systemd[1]: Startup finished in 10.283s (kernel) + 10.071s (userspace) = 20.354s. Oct 8 19:36:20.176861 kubelet[2355]: E1008 19:36:20.176760 2355 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:36:20.182666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:36:20.183107 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:36:20.747873 amazon-ssm-agent[2178]: 2024-10-08 19:36:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 8 19:36:20.848379 amazon-ssm-agent[2178]: 2024-10-08 19:36:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2394) started Oct 8 19:36:20.948715 amazon-ssm-agent[2178]: 2024-10-08 19:36:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 8 19:36:24.294075 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:36:24.302544 systemd[1]: Started sshd@0-172.31.17.52:22-139.178.68.195:52416.service - OpenSSH per-connection server daemon (139.178.68.195:52416). Oct 8 19:36:24.482547 sshd[2404]: Accepted publickey for core from 139.178.68.195 port 52416 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:36:24.486383 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:36:24.501594 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:36:24.516441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:36:24.521237 systemd-logind[2113]: New session 1 of user core. Oct 8 19:36:24.542466 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:36:24.561568 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:36:24.566761 (systemd)[2410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:36:24.780926 systemd[2410]: Queued start job for default target default.target. Oct 8 19:36:24.781664 systemd[2410]: Created slice app.slice - User Application Slice. Oct 8 19:36:24.781704 systemd[2410]: Reached target paths.target - Paths. Oct 8 19:36:24.781736 systemd[2410]: Reached target timers.target - Timers. 
Oct 8 19:36:24.789250 systemd[2410]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:36:24.812761 systemd[2410]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:36:24.812878 systemd[2410]: Reached target sockets.target - Sockets. Oct 8 19:36:24.812911 systemd[2410]: Reached target basic.target - Basic System. Oct 8 19:36:24.812994 systemd[2410]: Reached target default.target - Main User Target. Oct 8 19:36:24.813219 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:36:24.815617 systemd[2410]: Startup finished in 237ms. Oct 8 19:36:24.827614 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:36:24.978649 systemd[1]: Started sshd@1-172.31.17.52:22-139.178.68.195:52418.service - OpenSSH per-connection server daemon (139.178.68.195:52418). Oct 8 19:36:25.155525 sshd[2422]: Accepted publickey for core from 139.178.68.195 port 52418 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:36:25.158573 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:36:25.167823 systemd-logind[2113]: New session 2 of user core. Oct 8 19:36:25.179610 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:36:25.306471 sshd[2422]: pam_unix(sshd:session): session closed for user core Oct 8 19:36:25.312600 systemd[1]: sshd@1-172.31.17.52:22-139.178.68.195:52418.service: Deactivated successfully. Oct 8 19:36:25.318392 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:36:25.319787 systemd-logind[2113]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:36:25.321781 systemd-logind[2113]: Removed session 2. Oct 8 19:36:25.337555 systemd[1]: Started sshd@2-172.31.17.52:22-139.178.68.195:52430.service - OpenSSH per-connection server daemon (139.178.68.195:52430). 
Oct 8 19:36:25.517222 sshd[2430]: Accepted publickey for core from 139.178.68.195 port 52430 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:36:25.520698 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:36:25.530390 systemd-logind[2113]: New session 3 of user core. Oct 8 19:36:25.540609 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:36:25.663452 sshd[2430]: pam_unix(sshd:session): session closed for user core Oct 8 19:36:25.671883 systemd[1]: sshd@2-172.31.17.52:22-139.178.68.195:52430.service: Deactivated successfully. Oct 8 19:36:25.677943 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:36:25.679870 systemd-logind[2113]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:36:25.681905 systemd-logind[2113]: Removed session 3. Oct 8 19:36:25.698584 systemd[1]: Started sshd@3-172.31.17.52:22-139.178.68.195:52432.service - OpenSSH per-connection server daemon (139.178.68.195:52432). Oct 8 19:36:25.864483 sshd[2438]: Accepted publickey for core from 139.178.68.195 port 52432 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:36:25.866441 sshd[2438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:36:25.874819 systemd-logind[2113]: New session 4 of user core. Oct 8 19:36:25.881503 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:36:26.009307 sshd[2438]: pam_unix(sshd:session): session closed for user core Oct 8 19:36:26.014438 systemd[1]: sshd@3-172.31.17.52:22-139.178.68.195:52432.service: Deactivated successfully. Oct 8 19:36:26.021450 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:36:26.023199 systemd-logind[2113]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:36:26.024876 systemd-logind[2113]: Removed session 4. 
Oct 8 19:36:26.043466 systemd[1]: Started sshd@4-172.31.17.52:22-139.178.68.195:52442.service - OpenSSH per-connection server daemon (139.178.68.195:52442). Oct 8 19:36:26.214479 sshd[2446]: Accepted publickey for core from 139.178.68.195 port 52442 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:36:26.216378 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:36:26.225216 systemd-logind[2113]: New session 5 of user core. Oct 8 19:36:26.232725 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:36:26.375482 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:36:26.376145 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:36:26.392144 sudo[2450]: pam_unix(sudo:session): session closed for user root Oct 8 19:36:26.417454 sshd[2446]: pam_unix(sshd:session): session closed for user core Oct 8 19:36:26.425520 systemd[1]: sshd@4-172.31.17.52:22-139.178.68.195:52442.service: Deactivated successfully. Oct 8 19:36:26.430742 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:36:26.432556 systemd-logind[2113]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:36:26.434389 systemd-logind[2113]: Removed session 5. Oct 8 19:36:26.448520 systemd[1]: Started sshd@5-172.31.17.52:22-139.178.68.195:52448.service - OpenSSH per-connection server daemon (139.178.68.195:52448). Oct 8 19:36:26.615424 sshd[2455]: Accepted publickey for core from 139.178.68.195 port 52448 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:36:26.617703 sshd[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:36:26.626163 systemd-logind[2113]: New session 6 of user core. Oct 8 19:36:26.635708 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 8 19:36:26.743463 sudo[2460]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:36:26.744846 sudo[2460]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:36:26.752499 sudo[2460]: pam_unix(sudo:session): session closed for user root Oct 8 19:36:26.763138 sudo[2459]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:36:26.763806 sudo[2459]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:36:26.792559 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:36:26.796196 auditctl[2463]: No rules Oct 8 19:36:26.797127 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:36:26.797631 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:36:26.806858 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:36:26.863715 augenrules[2482]: No rules Oct 8 19:36:26.867556 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:36:26.871406 sudo[2459]: pam_unix(sudo:session): session closed for user root Oct 8 19:36:26.898305 sshd[2455]: pam_unix(sshd:session): session closed for user core Oct 8 19:36:26.903706 systemd-logind[2113]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:36:26.905372 systemd[1]: sshd@5-172.31.17.52:22-139.178.68.195:52448.service: Deactivated successfully. Oct 8 19:36:26.912303 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:36:26.915447 systemd-logind[2113]: Removed session 6. Oct 8 19:36:26.924663 systemd[1]: Started sshd@6-172.31.17.52:22-139.178.68.195:52454.service - OpenSSH per-connection server daemon (139.178.68.195:52454). 
Oct 8 19:36:27.097457 sshd[2491]: Accepted publickey for core from 139.178.68.195 port 52454 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:36:27.100095 sshd[2491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:36:27.110205 systemd-logind[2113]: New session 7 of user core. Oct 8 19:36:27.116612 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:36:27.223942 sudo[2495]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:36:27.224741 sudo[2495]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:36:27.808682 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:36:27.811055 (dockerd)[2510]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:36:28.247794 dockerd[2510]: time="2024-10-08T19:36:28.247128002Z" level=info msg="Starting up" Oct 8 19:36:28.389071 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport480758327-merged.mount: Deactivated successfully. Oct 8 19:36:29.241164 dockerd[2510]: time="2024-10-08T19:36:29.241084158Z" level=info msg="Loading containers: start." Oct 8 19:36:29.445134 kernel: Initializing XFRM netlink socket Oct 8 19:36:29.475961 (udev-worker)[2531]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:36:29.560920 systemd-networkd[1685]: docker0: Link UP Oct 8 19:36:29.586743 dockerd[2510]: time="2024-10-08T19:36:29.586656511Z" level=info msg="Loading containers: done." 
Oct 8 19:36:29.613952 dockerd[2510]: time="2024-10-08T19:36:29.613873164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:36:29.614193 dockerd[2510]: time="2024-10-08T19:36:29.614110501Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:36:29.614446 dockerd[2510]: time="2024-10-08T19:36:29.614379310Z" level=info msg="Daemon has completed initialization" Oct 8 19:36:29.689794 dockerd[2510]: time="2024-10-08T19:36:29.689713901Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:36:29.690370 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:36:30.231221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:36:30.243478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:36:30.858994 containerd[2143]: time="2024-10-08T19:36:30.858556910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 19:36:30.894238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:36:30.913698 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:36:31.006904 kubelet[2668]: E1008 19:36:31.006324 2668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:36:31.015204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:36:31.015806 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:36:31.555548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268553111.mount: Deactivated successfully. Oct 8 19:36:33.497565 containerd[2143]: time="2024-10-08T19:36:33.497464108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:33.499921 containerd[2143]: time="2024-10-08T19:36:33.499839616Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286058" Oct 8 19:36:33.501085 containerd[2143]: time="2024-10-08T19:36:33.500926056Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:33.506954 containerd[2143]: time="2024-10-08T19:36:33.506824778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:33.510153 containerd[2143]: time="2024-10-08T19:36:33.509331464Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id 
\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.650672653s" Oct 8 19:36:33.510153 containerd[2143]: time="2024-10-08T19:36:33.509436447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\"" Oct 8 19:36:33.552763 containerd[2143]: time="2024-10-08T19:36:33.552332976Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 19:36:35.779145 containerd[2143]: time="2024-10-08T19:36:35.779061902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:35.781233 containerd[2143]: time="2024-10-08T19:36:35.781177211Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374204" Oct 8 19:36:35.782708 containerd[2143]: time="2024-10-08T19:36:35.782587466Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:35.790362 containerd[2143]: time="2024-10-08T19:36:35.790269767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:35.793209 containerd[2143]: time="2024-10-08T19:36:35.792876771Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 2.240456118s" Oct 8 19:36:35.793209 containerd[2143]: time="2024-10-08T19:36:35.792944405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\"" Oct 8 19:36:35.837236 containerd[2143]: time="2024-10-08T19:36:35.837141707Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 8 19:36:36.993062 containerd[2143]: time="2024-10-08T19:36:36.991378088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:36.994927 containerd[2143]: time="2024-10-08T19:36:36.994883742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751217" Oct 8 19:36:36.997402 containerd[2143]: time="2024-10-08T19:36:36.997357924Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:37.002442 containerd[2143]: time="2024-10-08T19:36:37.002392694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:37.004876 containerd[2143]: time="2024-10-08T19:36:37.004826708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.167628125s" Oct 8 19:36:37.005084 containerd[2143]: 
time="2024-10-08T19:36:37.005017593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\"" Oct 8 19:36:37.043746 containerd[2143]: time="2024-10-08T19:36:37.043631412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 8 19:36:38.444257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635574767.mount: Deactivated successfully. Oct 8 19:36:39.081685 containerd[2143]: time="2024-10-08T19:36:39.081626927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:39.083661 containerd[2143]: time="2024-10-08T19:36:39.083606692Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254038" Oct 8 19:36:39.084740 containerd[2143]: time="2024-10-08T19:36:39.084653984Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:39.089462 containerd[2143]: time="2024-10-08T19:36:39.089405767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:39.090606 containerd[2143]: time="2024-10-08T19:36:39.090412856Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 2.046441761s" Oct 8 19:36:39.090606 containerd[2143]: time="2024-10-08T19:36:39.090466997Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" 
returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\"" Oct 8 19:36:39.128113 containerd[2143]: time="2024-10-08T19:36:39.127959473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:36:39.715749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044409285.mount: Deactivated successfully. Oct 8 19:36:41.107210 containerd[2143]: time="2024-10-08T19:36:41.107126732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:41.148463 containerd[2143]: time="2024-10-08T19:36:41.148378632Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Oct 8 19:36:41.185342 containerd[2143]: time="2024-10-08T19:36:41.185254797Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:41.228920 containerd[2143]: time="2024-10-08T19:36:41.228802888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:41.231484 containerd[2143]: time="2024-10-08T19:36:41.231284123Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.103261525s" Oct 8 19:36:41.231484 containerd[2143]: time="2024-10-08T19:36:41.231344069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:36:41.266101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:36:41.277903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:36:41.282255 containerd[2143]: time="2024-10-08T19:36:41.280637238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:36:42.629294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583666062.mount: Deactivated successfully. Oct 8 19:36:42.701605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:36:42.717629 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:36:42.784229 containerd[2143]: time="2024-10-08T19:36:42.783857305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:42.813725 containerd[2143]: time="2024-10-08T19:36:42.813658616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Oct 8 19:36:42.817878 kubelet[2829]: E1008 19:36:42.817780 2829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:36:42.823412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:36:42.823997 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:36:42.832091 containerd[2143]: time="2024-10-08T19:36:42.831948483Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:42.884195 containerd[2143]: time="2024-10-08T19:36:42.883955259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:42.886886 containerd[2143]: time="2024-10-08T19:36:42.886481591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.604289535s"
Oct 8 19:36:42.886886 containerd[2143]: time="2024-10-08T19:36:42.886538994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 19:36:42.928237 containerd[2143]: time="2024-10-08T19:36:42.928093119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 19:36:43.807209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135675420.mount: Deactivated successfully.
Oct 8 19:36:46.568455 containerd[2143]: time="2024-10-08T19:36:46.568375794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:46.571342 containerd[2143]: time="2024-10-08T19:36:46.571290720Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Oct 8 19:36:46.572052 containerd[2143]: time="2024-10-08T19:36:46.571897712Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:46.579828 containerd[2143]: time="2024-10-08T19:36:46.579717128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:46.582342 containerd[2143]: time="2024-10-08T19:36:46.582283185Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.654134989s"
Oct 8 19:36:46.582627 containerd[2143]: time="2024-10-08T19:36:46.582485991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 8 19:36:48.556940 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 8 19:36:52.701767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:36:52.714632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:36:52.761696 systemd[1]: Reloading requested from client PID 2958 ('systemctl') (unit session-7.scope)...
Oct 8 19:36:52.761731 systemd[1]: Reloading...
Oct 8 19:36:52.925059 zram_generator::config[2998]: No configuration found.
Oct 8 19:36:53.211951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:36:53.373865 systemd[1]: Reloading finished in 611 ms.
Oct 8 19:36:53.448821 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:36:53.449062 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:36:53.449686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:36:53.460842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:36:54.133382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:36:54.150734 (kubelet)[3071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:36:54.239668 kubelet[3071]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:36:54.239668 kubelet[3071]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:36:54.239668 kubelet[3071]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:36:54.240285 kubelet[3071]: I1008 19:36:54.239756 3071 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:36:56.611216 kubelet[3071]: I1008 19:36:56.611174 3071 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:36:56.611827 kubelet[3071]: I1008 19:36:56.611804 3071 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:36:56.613005 kubelet[3071]: I1008 19:36:56.612925 3071 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:36:56.650407 kubelet[3071]: E1008 19:36:56.650360 3071 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.650687 kubelet[3071]: I1008 19:36:56.650634 3071 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:36:56.665389 kubelet[3071]: I1008 19:36:56.665350 3071 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:36:56.668348 kubelet[3071]: I1008 19:36:56.668310 3071 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:36:56.668813 kubelet[3071]: I1008 19:36:56.668784 3071 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:36:56.669072 kubelet[3071]: I1008 19:36:56.669052 3071 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:36:56.669185 kubelet[3071]: I1008 19:36:56.669164 3071 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:36:56.671650 kubelet[3071]: I1008 19:36:56.671623 3071 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:36:56.675939 kubelet[3071]: I1008 19:36:56.675908 3071 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:36:56.676387 kubelet[3071]: I1008 19:36:56.676114 3071 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:36:56.676387 kubelet[3071]: I1008 19:36:56.676170 3071 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:36:56.676387 kubelet[3071]: I1008 19:36:56.676201 3071 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:36:56.679358 kubelet[3071]: W1008 19:36:56.679269 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.17.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-52&limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.679829 kubelet[3071]: E1008 19:36:56.679581 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-52&limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.679829 kubelet[3071]: W1008 19:36:56.679729 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.17.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.679829 kubelet[3071]: E1008 19:36:56.679799 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.680214 kubelet[3071]: I1008 19:36:56.680188 3071 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 19:36:56.680817 kubelet[3071]: I1008 19:36:56.680789 3071 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:36:56.684077 kubelet[3071]: W1008 19:36:56.682267 3071 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:36:56.684077 kubelet[3071]: I1008 19:36:56.683815 3071 server.go:1256] "Started kubelet"
Oct 8 19:36:56.686741 kubelet[3071]: I1008 19:36:56.686700 3071 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:36:56.688191 kubelet[3071]: I1008 19:36:56.688140 3071 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:36:56.689415 kubelet[3071]: I1008 19:36:56.689379 3071 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:36:56.689931 kubelet[3071]: I1008 19:36:56.689908 3071 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:36:56.693908 kubelet[3071]: E1008 19:36:56.693869 3071 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.52:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.52:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-52.17fc91643b75dc7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-52,UID:ip-172-31-17-52,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-52,},FirstTimestamp:2024-10-08 19:36:56.683764863 +0000 UTC m=+2.526206837,LastTimestamp:2024-10-08 19:36:56.683764863 +0000 UTC m=+2.526206837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-52,}"
Oct 8 19:36:56.694188 kubelet[3071]: I1008 19:36:56.694109 3071 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:36:56.696612 kubelet[3071]: I1008 19:36:56.696572 3071 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:36:56.701919 kubelet[3071]: I1008 19:36:56.701860 3071 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:36:56.702403 kubelet[3071]: I1008 19:36:56.702377 3071 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:36:56.704358 kubelet[3071]: W1008 19:36:56.704259 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.17.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.704635 kubelet[3071]: E1008 19:36:56.704609 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.709004 kubelet[3071]: I1008 19:36:56.708932 3071 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:36:56.709269 kubelet[3071]: I1008 19:36:56.709226 3071 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:36:56.709867 kubelet[3071]: E1008 19:36:56.709824 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-52?timeout=10s\": dial tcp 172.31.17.52:6443: connect: connection refused" interval="200ms"
Oct 8 19:36:56.714099 kubelet[3071]: I1008 19:36:56.713855 3071 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:36:56.744626 kubelet[3071]: I1008 19:36:56.744380 3071 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:36:56.757639 kubelet[3071]: I1008 19:36:56.757117 3071 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:36:56.757639 kubelet[3071]: I1008 19:36:56.757162 3071 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:36:56.757639 kubelet[3071]: I1008 19:36:56.757197 3071 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:36:56.757639 kubelet[3071]: E1008 19:36:56.757275 3071 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:36:56.762915 kubelet[3071]: W1008 19:36:56.762803 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.17.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.763154 kubelet[3071]: E1008 19:36:56.762925 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:56.780499 kubelet[3071]: I1008 19:36:56.780452 3071 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:36:56.781430 kubelet[3071]: I1008 19:36:56.781002 3071 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:36:56.781430 kubelet[3071]: I1008 19:36:56.781114 3071 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:36:56.784290 kubelet[3071]: I1008 19:36:56.784147 3071 policy_none.go:49] "None policy: Start"
Oct 8 19:36:56.785255 kubelet[3071]: I1008 19:36:56.785208 3071 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:36:56.785389 kubelet[3071]: I1008 19:36:56.785304 3071 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:36:56.794708 kubelet[3071]: I1008 19:36:56.794648 3071 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:36:56.795122 kubelet[3071]: I1008 19:36:56.795080 3071 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:36:56.804392 kubelet[3071]: I1008 19:36:56.804353 3071 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-52"
Oct 8 19:36:56.806963 kubelet[3071]: E1008 19:36:56.806712 3071 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.52:6443/api/v1/nodes\": dial tcp 172.31.17.52:6443: connect: connection refused" node="ip-172-31-17-52"
Oct 8 19:36:56.807701 kubelet[3071]: E1008 19:36:56.807654 3071 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-52\" not found"
Oct 8 19:36:56.858780 kubelet[3071]: I1008 19:36:56.858209 3071 topology_manager.go:215] "Topology Admit Handler" podUID="cfecdda3b68161c28b64bc19195af97f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-52"
Oct 8 19:36:56.862520 kubelet[3071]: I1008 19:36:56.861393 3071 topology_manager.go:215] "Topology Admit Handler" podUID="9fdafbbb6411bb060ada4c6a32d26557" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-52"
Oct 8 19:36:56.867669 kubelet[3071]: I1008 19:36:56.867473 3071 topology_manager.go:215] "Topology Admit Handler" podUID="9d35ca615d0fd3cbb0b757b8be99294e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-52"
Oct 8 19:36:56.904106 kubelet[3071]: I1008 19:36:56.904011 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:36:56.904106 kubelet[3071]: I1008 19:36:56.904103 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:36:56.904473 kubelet[3071]: I1008 19:36:56.904151 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d35ca615d0fd3cbb0b757b8be99294e-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-52\" (UID: \"9d35ca615d0fd3cbb0b757b8be99294e\") " pod="kube-system/kube-scheduler-ip-172-31-17-52"
Oct 8 19:36:56.904473 kubelet[3071]: I1008 19:36:56.904195 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfecdda3b68161c28b64bc19195af97f-ca-certs\") pod \"kube-apiserver-ip-172-31-17-52\" (UID: \"cfecdda3b68161c28b64bc19195af97f\") " pod="kube-system/kube-apiserver-ip-172-31-17-52"
Oct 8 19:36:56.904473 kubelet[3071]: I1008 19:36:56.904242 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfecdda3b68161c28b64bc19195af97f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-52\" (UID: \"cfecdda3b68161c28b64bc19195af97f\") " pod="kube-system/kube-apiserver-ip-172-31-17-52"
Oct 8 19:36:56.904473 kubelet[3071]: I1008 19:36:56.904315 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:36:56.904903 kubelet[3071]: I1008 19:36:56.904761 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:36:56.904903 kubelet[3071]: I1008 19:36:56.904837 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:36:56.904903 kubelet[3071]: I1008 19:36:56.904902 3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfecdda3b68161c28b64bc19195af97f-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-52\" (UID: \"cfecdda3b68161c28b64bc19195af97f\") " pod="kube-system/kube-apiserver-ip-172-31-17-52"
Oct 8 19:36:56.910611 kubelet[3071]: E1008 19:36:56.910582 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-52?timeout=10s\": dial tcp 172.31.17.52:6443: connect: connection refused" interval="400ms"
Oct 8 19:36:57.009238 kubelet[3071]: I1008 19:36:57.009140 3071 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-52"
Oct 8 19:36:57.009748 kubelet[3071]: E1008 19:36:57.009708 3071 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.52:6443/api/v1/nodes\": dial tcp 172.31.17.52:6443: connect: connection refused" node="ip-172-31-17-52"
Oct 8 19:36:57.182249 containerd[2143]: time="2024-10-08T19:36:57.182059673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-52,Uid:cfecdda3b68161c28b64bc19195af97f,Namespace:kube-system,Attempt:0,}"
Oct 8 19:36:57.186766 containerd[2143]: time="2024-10-08T19:36:57.186384469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-52,Uid:9d35ca615d0fd3cbb0b757b8be99294e,Namespace:kube-system,Attempt:0,}"
Oct 8 19:36:57.186766 containerd[2143]: time="2024-10-08T19:36:57.186455678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-52,Uid:9fdafbbb6411bb060ada4c6a32d26557,Namespace:kube-system,Attempt:0,}"
Oct 8 19:36:57.311824 kubelet[3071]: E1008 19:36:57.311724 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-52?timeout=10s\": dial tcp 172.31.17.52:6443: connect: connection refused" interval="800ms"
Oct 8 19:36:57.412819 kubelet[3071]: I1008 19:36:57.412777 3071 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-52"
Oct 8 19:36:57.413979 kubelet[3071]: E1008 19:36:57.413939 3071 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.52:6443/api/v1/nodes\": dial tcp 172.31.17.52:6443: connect: connection refused" node="ip-172-31-17-52"
Oct 8 19:36:57.800477 kubelet[3071]: W1008 19:36:57.800378 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.17.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-52&limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:57.800477 kubelet[3071]: E1008 19:36:57.800481 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-52&limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:57.953235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994290363.mount: Deactivated successfully.
Oct 8 19:36:57.963083 containerd[2143]: time="2024-10-08T19:36:57.962785285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:57.967861 containerd[2143]: time="2024-10-08T19:36:57.967806537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Oct 8 19:36:57.970103 containerd[2143]: time="2024-10-08T19:36:57.969303436Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:57.970975 containerd[2143]: time="2024-10-08T19:36:57.970903736Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:57.973486 containerd[2143]: time="2024-10-08T19:36:57.973412449Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:57.975657 containerd[2143]: time="2024-10-08T19:36:57.975498325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:36:57.975657 containerd[2143]: time="2024-10-08T19:36:57.975565972Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:36:57.980567 containerd[2143]: time="2024-10-08T19:36:57.980418552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:57.985332 containerd[2143]: time="2024-10-08T19:36:57.984955714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 798.449674ms"
Oct 8 19:36:57.989571 containerd[2143]: time="2024-10-08T19:36:57.989478904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 807.277294ms"
Oct 8 19:36:57.990676 containerd[2143]: time="2024-10-08T19:36:57.990508696Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 803.955016ms"
Oct 8 19:36:58.113735 kubelet[3071]: E1008 19:36:58.113567 3071 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-52?timeout=10s\": dial tcp 172.31.17.52:6443: connect: connection refused" interval="1.6s"
Oct 8 19:36:58.125670 kubelet[3071]: W1008 19:36:58.125564 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.17.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:58.125670 kubelet[3071]: E1008 19:36:58.125675 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:58.127859 kubelet[3071]: W1008 19:36:58.127699 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.17.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:58.127859 kubelet[3071]: E1008 19:36:58.127789 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:58.219255 kubelet[3071]: I1008 19:36:58.218683 3071 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-52"
Oct 8 19:36:58.219255 kubelet[3071]: E1008 19:36:58.219217 3071 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.52:6443/api/v1/nodes\": dial tcp 172.31.17.52:6443: connect: connection refused" node="ip-172-31-17-52"
Oct 8 19:36:58.232384 kubelet[3071]: W1008 19:36:58.232279 3071 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.17.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:58.232384 kubelet[3071]: E1008 19:36:58.232347 3071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.52:6443: connect: connection refused
Oct 8 19:36:58.249700 containerd[2143]: time="2024-10-08T19:36:58.249366300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:36:58.249700 containerd[2143]: time="2024-10-08T19:36:58.249460033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:36:58.249700 containerd[2143]: time="2024-10-08T19:36:58.249502324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:36:58.250815 containerd[2143]: time="2024-10-08T19:36:58.249658426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:36:58.253899 containerd[2143]: time="2024-10-08T19:36:58.253369705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:36:58.253899 containerd[2143]: time="2024-10-08T19:36:58.253466688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:36:58.253899 containerd[2143]: time="2024-10-08T19:36:58.253504014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:36:58.253899 containerd[2143]: time="2024-10-08T19:36:58.253669411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:36:58.260069 containerd[2143]: time="2024-10-08T19:36:58.259317170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:36:58.260069 containerd[2143]: time="2024-10-08T19:36:58.259405062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:36:58.260069 containerd[2143]: time="2024-10-08T19:36:58.259462165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:36:58.260768 containerd[2143]: time="2024-10-08T19:36:58.260000864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:36:58.432554 containerd[2143]: time="2024-10-08T19:36:58.432391370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-52,Uid:9fdafbbb6411bb060ada4c6a32d26557,Namespace:kube-system,Attempt:0,} returns sandbox id \"b90122a56610340c936e9dac29f2ef05e1390e7215b4071c6ec00e0a3a60eb0b\""
Oct 8 19:36:58.440232 containerd[2143]: time="2024-10-08T19:36:58.440179170Z" level=info msg="CreateContainer within sandbox \"b90122a56610340c936e9dac29f2ef05e1390e7215b4071c6ec00e0a3a60eb0b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 19:36:58.456914 containerd[2143]: time="2024-10-08T19:36:58.456796233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-52,Uid:cfecdda3b68161c28b64bc19195af97f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6ef6467689689faf1e662d8e129e8c6adcff6ec5f841b5a94f4f35d87b42e19\""
Oct 8 19:36:58.461148 containerd[2143]: time="2024-10-08T19:36:58.460894967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-52,Uid:9d35ca615d0fd3cbb0b757b8be99294e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab05dfe3f4fe83431eb6e64ed55cdd437b1283a5819ecd57b97a3216ae665a63\""
Oct 8 19:36:58.465466 containerd[2143]: time="2024-10-08T19:36:58.465268555Z" level=info msg="CreateContainer within sandbox \"f6ef6467689689faf1e662d8e129e8c6adcff6ec5f841b5a94f4f35d87b42e19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 19:36:58.468862 containerd[2143]: time="2024-10-08T19:36:58.468532494Z" level=info msg="CreateContainer within sandbox \"ab05dfe3f4fe83431eb6e64ed55cdd437b1283a5819ecd57b97a3216ae665a63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 19:36:58.484323 containerd[2143]: time="2024-10-08T19:36:58.483993960Z" level=info msg="CreateContainer within sandbox
\"b90122a56610340c936e9dac29f2ef05e1390e7215b4071c6ec00e0a3a60eb0b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"353e482c6c40b46f2e29cb3cc300376a799968b46b7fd864b09080812459b696\"" Oct 8 19:36:58.485847 containerd[2143]: time="2024-10-08T19:36:58.485795435Z" level=info msg="StartContainer for \"353e482c6c40b46f2e29cb3cc300376a799968b46b7fd864b09080812459b696\"" Oct 8 19:36:58.523635 containerd[2143]: time="2024-10-08T19:36:58.523549884Z" level=info msg="CreateContainer within sandbox \"f6ef6467689689faf1e662d8e129e8c6adcff6ec5f841b5a94f4f35d87b42e19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"80474db35f439aaba1e561bfa9a8e47bd80e9c9252d350efd88fb610ec0e6bb0\"" Oct 8 19:36:58.524585 containerd[2143]: time="2024-10-08T19:36:58.524543455Z" level=info msg="StartContainer for \"80474db35f439aaba1e561bfa9a8e47bd80e9c9252d350efd88fb610ec0e6bb0\"" Oct 8 19:36:58.531418 containerd[2143]: time="2024-10-08T19:36:58.531341342Z" level=info msg="CreateContainer within sandbox \"ab05dfe3f4fe83431eb6e64ed55cdd437b1283a5819ecd57b97a3216ae665a63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"985439acf2d3737ba7f2c641dc8305467ac7c628b5116ebb3d94b07af655e3ed\"" Oct 8 19:36:58.532213 containerd[2143]: time="2024-10-08T19:36:58.532139015Z" level=info msg="StartContainer for \"985439acf2d3737ba7f2c641dc8305467ac7c628b5116ebb3d94b07af655e3ed\"" Oct 8 19:36:58.684149 containerd[2143]: time="2024-10-08T19:36:58.682246756Z" level=info msg="StartContainer for \"353e482c6c40b46f2e29cb3cc300376a799968b46b7fd864b09080812459b696\" returns successfully" Oct 8 19:36:58.718675 kubelet[3071]: E1008 19:36:58.718407 3071 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial 
tcp 172.31.17.52:6443: connect: connection refused Oct 8 19:36:58.750690 containerd[2143]: time="2024-10-08T19:36:58.750436857Z" level=info msg="StartContainer for \"985439acf2d3737ba7f2c641dc8305467ac7c628b5116ebb3d94b07af655e3ed\" returns successfully" Oct 8 19:36:58.842869 containerd[2143]: time="2024-10-08T19:36:58.842757865Z" level=info msg="StartContainer for \"80474db35f439aaba1e561bfa9a8e47bd80e9c9252d350efd88fb610ec0e6bb0\" returns successfully" Oct 8 19:36:59.823213 kubelet[3071]: I1008 19:36:59.823180 3071 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-52" Oct 8 19:37:01.950286 kubelet[3071]: E1008 19:37:01.950174 3071 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-52\" not found" node="ip-172-31-17-52" Oct 8 19:37:01.986155 kubelet[3071]: I1008 19:37:01.984379 3071 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-52" Oct 8 19:37:02.567634 update_engine[2116]: I20241008 19:37:02.567552 2116 update_attempter.cc:509] Updating boot flags... Oct 8 19:37:02.681963 kubelet[3071]: I1008 19:37:02.681835 3071 apiserver.go:52] "Watching apiserver" Oct 8 19:37:02.703382 kubelet[3071]: I1008 19:37:02.703177 3071 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:37:02.750406 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3355) Oct 8 19:37:03.225332 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3355) Oct 8 19:37:05.338261 systemd[1]: Reloading requested from client PID 3525 ('systemctl') (unit session-7.scope)... Oct 8 19:37:05.338298 systemd[1]: Reloading... Oct 8 19:37:05.508236 zram_generator::config[3565]: No configuration found. 
Oct 8 19:37:05.762702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:37:05.941270 systemd[1]: Reloading finished in 602 ms.
Oct 8 19:37:06.001875 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:37:06.017099 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:37:06.017837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:37:06.026688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:37:06.430446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:37:06.454824 (kubelet)[3635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:37:06.583557 kubelet[3635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:37:06.583557 kubelet[3635]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:37:06.583557 kubelet[3635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:37:06.584143 kubelet[3635]: I1008 19:37:06.583684 3635 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:37:06.594274 kubelet[3635]: I1008 19:37:06.594221 3635 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:37:06.594274 kubelet[3635]: I1008 19:37:06.594269 3635 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:37:06.595333 kubelet[3635]: I1008 19:37:06.594739 3635 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:37:06.599177 kubelet[3635]: I1008 19:37:06.599131 3635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:37:06.603609 kubelet[3635]: I1008 19:37:06.603194 3635 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:37:06.622223 kubelet[3635]: I1008 19:37:06.622173 3635 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:37:06.622995 kubelet[3635]: I1008 19:37:06.622941 3635 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:37:06.623907 kubelet[3635]: I1008 19:37:06.623287 3635 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:37:06.623907 kubelet[3635]: I1008 19:37:06.623338 3635 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:37:06.623907 kubelet[3635]: I1008 19:37:06.623361 3635 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:37:06.623907 kubelet[3635]: I1008 19:37:06.623417 3635 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:37:06.623907 kubelet[3635]: I1008 19:37:06.623580 3635 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:37:06.623907 kubelet[3635]: I1008 19:37:06.623604 3635 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:37:06.623907 kubelet[3635]: I1008 19:37:06.623641 3635 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:37:06.624495 kubelet[3635]: I1008 19:37:06.623671 3635 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:37:06.638142 kubelet[3635]: I1008 19:37:06.637557 3635 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 19:37:06.638142 kubelet[3635]: I1008 19:37:06.637881 3635 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:37:06.639726 kubelet[3635]: I1008 19:37:06.639422 3635 server.go:1256] "Started kubelet"
Oct 8 19:37:06.653924 kubelet[3635]: I1008 19:37:06.653869 3635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:37:06.663120 kubelet[3635]: I1008 19:37:06.662841 3635 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:37:06.666966 kubelet[3635]: I1008 19:37:06.665421 3635 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:37:06.668834 kubelet[3635]: I1008 19:37:06.668783 3635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:37:06.669556 kubelet[3635]: I1008 19:37:06.669291 3635 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:37:06.673726 kubelet[3635]: I1008 19:37:06.672701 3635 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:37:06.673726 kubelet[3635]: I1008 19:37:06.673353 3635 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:37:06.673726 kubelet[3635]: I1008 19:37:06.673722 3635 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:37:06.683673 kubelet[3635]: I1008 19:37:06.681707 3635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:37:06.704245 kubelet[3635]: I1008 19:37:06.704184 3635 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:37:06.704245 kubelet[3635]: I1008 19:37:06.704218 3635 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:37:06.763948 kubelet[3635]: I1008 19:37:06.762278 3635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:37:06.766001 kubelet[3635]: I1008 19:37:06.765382 3635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:37:06.766001 kubelet[3635]: I1008 19:37:06.765431 3635 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:37:06.766001 kubelet[3635]: I1008 19:37:06.765461 3635 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:37:06.766001 kubelet[3635]: E1008 19:37:06.765559 3635 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:37:06.801552 kubelet[3635]: I1008 19:37:06.801259 3635 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-52"
Oct 8 19:37:06.819137 kubelet[3635]: I1008 19:37:06.818699 3635 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-17-52"
Oct 8 19:37:06.819137 kubelet[3635]: I1008 19:37:06.818914 3635 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-52"
Oct 8 19:37:06.865951 kubelet[3635]: E1008 19:37:06.865870 3635 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 19:37:06.911222 kubelet[3635]: I1008 19:37:06.910622 3635 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:37:06.911222 kubelet[3635]: I1008 19:37:06.910662 3635 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:37:06.911222 kubelet[3635]: I1008 19:37:06.910741 3635 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:37:06.911222 kubelet[3635]: I1008 19:37:06.910970 3635 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:37:06.911222 kubelet[3635]: I1008 19:37:06.911021 3635 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:37:06.911222 kubelet[3635]: I1008 19:37:06.911087 3635 policy_none.go:49] "None policy: Start"
Oct 8 19:37:06.914021 kubelet[3635]: I1008 19:37:06.912870 3635 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:37:06.914021 kubelet[3635]: I1008 19:37:06.913206 3635 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:37:06.914021 kubelet[3635]: I1008 19:37:06.913443 3635 state_mem.go:75] "Updated machine memory state"
Oct 8 19:37:06.916048 kubelet[3635]: I1008 19:37:06.915991 3635 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:37:06.920379 kubelet[3635]: I1008 19:37:06.919711 3635 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:37:07.067204 kubelet[3635]: I1008 19:37:07.067158 3635 topology_manager.go:215] "Topology Admit Handler" podUID="cfecdda3b68161c28b64bc19195af97f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-52"
Oct 8 19:37:07.067648 kubelet[3635]: I1008 19:37:07.067509 3635 topology_manager.go:215] "Topology Admit Handler" podUID="9fdafbbb6411bb060ada4c6a32d26557" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-52"
Oct 8 19:37:07.070662 kubelet[3635]: I1008 19:37:07.067817 3635 topology_manager.go:215] "Topology Admit Handler" podUID="9d35ca615d0fd3cbb0b757b8be99294e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-52"
Oct 8 19:37:07.081057 kubelet[3635]: I1008 19:37:07.079851 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfecdda3b68161c28b64bc19195af97f-ca-certs\") pod \"kube-apiserver-ip-172-31-17-52\" (UID: \"cfecdda3b68161c28b64bc19195af97f\") " pod="kube-system/kube-apiserver-ip-172-31-17-52"
Oct 8 19:37:07.081057 kubelet[3635]: I1008 19:37:07.079926 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfecdda3b68161c28b64bc19195af97f-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-52\" (UID: \"cfecdda3b68161c28b64bc19195af97f\") " pod="kube-system/kube-apiserver-ip-172-31-17-52"
Oct 8 19:37:07.081057 kubelet[3635]: I1008 19:37:07.079974 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfecdda3b68161c28b64bc19195af97f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-52\" (UID: \"cfecdda3b68161c28b64bc19195af97f\") " pod="kube-system/kube-apiserver-ip-172-31-17-52"
Oct 8 19:37:07.081057 kubelet[3635]: I1008 19:37:07.080017 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:37:07.081057 kubelet[3635]: I1008 19:37:07.080089 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:37:07.081424 kubelet[3635]: I1008 19:37:07.080133 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:37:07.082526 kubelet[3635]: I1008 19:37:07.081610 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:37:07.082526 kubelet[3635]: I1008 19:37:07.081686 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fdafbbb6411bb060ada4c6a32d26557-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-52\" (UID: \"9fdafbbb6411bb060ada4c6a32d26557\") " pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:37:07.082526 kubelet[3635]: I1008 19:37:07.081761 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d35ca615d0fd3cbb0b757b8be99294e-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-52\" (UID: \"9d35ca615d0fd3cbb0b757b8be99294e\") " pod="kube-system/kube-scheduler-ip-172-31-17-52"
Oct 8 19:37:07.624928 kubelet[3635]: I1008 19:37:07.624867 3635 apiserver.go:52] "Watching apiserver"
Oct 8 19:37:07.674599 kubelet[3635]: I1008 19:37:07.674484 3635 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 19:37:07.852347 kubelet[3635]: E1008 19:37:07.852294 3635 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-52\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-52"
Oct 8 19:37:07.853317 kubelet[3635]: E1008 19:37:07.853272 3635 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-52\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-52"
Oct 8 19:37:07.928774 kubelet[3635]: I1008 19:37:07.928603 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-52" podStartSLOduration=0.928539597 podStartE2EDuration="928.539597ms" podCreationTimestamp="2024-10-08 19:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:07.901870266 +0000 UTC m=+1.435727718" watchObservedRunningTime="2024-10-08 19:37:07.928539597 +0000 UTC m=+1.462397013"
Oct 8 19:37:07.964165 kubelet[3635]: I1008 19:37:07.964099 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-52" podStartSLOduration=0.963965243 podStartE2EDuration="963.965243ms" podCreationTimestamp="2024-10-08 19:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:07.929147741 +0000 UTC m=+1.463005193" watchObservedRunningTime="2024-10-08 19:37:07.963965243 +0000 UTC m=+1.497822683"
Oct 8 19:37:08.974066 kubelet[3635]: I1008 19:37:08.973995 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-52" podStartSLOduration=1.9739376709999998 podStartE2EDuration="1.973937671s" podCreationTimestamp="2024-10-08 19:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:07.967850819 +0000 UTC m=+1.501708319" watchObservedRunningTime="2024-10-08 19:37:08.973937671 +0000 UTC m=+2.507795123"
Oct 8 19:37:13.163621 sudo[2495]: pam_unix(sudo:session): session closed for user root
Oct 8 19:37:13.186883 sshd[2491]: pam_unix(sshd:session): session closed for user core
Oct 8 19:37:13.194674 systemd[1]: sshd@6-172.31.17.52:22-139.178.68.195:52454.service: Deactivated successfully.
Oct 8 19:37:13.203241 systemd-logind[2113]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:37:13.204817 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:37:13.207796 systemd-logind[2113]: Removed session 7.
Oct 8 19:37:20.488085 kubelet[3635]: I1008 19:37:20.487011 3635 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:37:20.492350 containerd[2143]: time="2024-10-08T19:37:20.492106844Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:37:20.495917 kubelet[3635]: I1008 19:37:20.492564 3635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:37:20.828725 kubelet[3635]: I1008 19:37:20.828557 3635 topology_manager.go:215] "Topology Admit Handler" podUID="927d8614-6b62-4cbc-b36c-3bafec5bde7b" podNamespace="kube-system" podName="kube-proxy-zbxh7"
Oct 8 19:37:20.877960 kubelet[3635]: I1008 19:37:20.877849 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/927d8614-6b62-4cbc-b36c-3bafec5bde7b-kube-proxy\") pod \"kube-proxy-zbxh7\" (UID: \"927d8614-6b62-4cbc-b36c-3bafec5bde7b\") " pod="kube-system/kube-proxy-zbxh7"
Oct 8 19:37:20.878302 kubelet[3635]: I1008 19:37:20.878267 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/927d8614-6b62-4cbc-b36c-3bafec5bde7b-xtables-lock\") pod \"kube-proxy-zbxh7\" (UID: \"927d8614-6b62-4cbc-b36c-3bafec5bde7b\") " pod="kube-system/kube-proxy-zbxh7"
Oct 8 19:37:20.878802 kubelet[3635]: I1008 19:37:20.878554 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/927d8614-6b62-4cbc-b36c-3bafec5bde7b-lib-modules\") pod \"kube-proxy-zbxh7\" (UID: \"927d8614-6b62-4cbc-b36c-3bafec5bde7b\") " pod="kube-system/kube-proxy-zbxh7"
Oct 8 19:37:20.879045 kubelet[3635]: I1008 19:37:20.878960 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdnm7\" (UniqueName: \"kubernetes.io/projected/927d8614-6b62-4cbc-b36c-3bafec5bde7b-kube-api-access-qdnm7\") pod \"kube-proxy-zbxh7\" (UID: \"927d8614-6b62-4cbc-b36c-3bafec5bde7b\") " pod="kube-system/kube-proxy-zbxh7"
Oct 8 19:37:20.990337 kubelet[3635]: E1008 19:37:20.990263 3635 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 8 19:37:20.990337 kubelet[3635]: E1008 19:37:20.990308 3635 projected.go:200] Error preparing data for projected volume kube-api-access-qdnm7 for pod kube-system/kube-proxy-zbxh7: configmap "kube-root-ca.crt" not found
Oct 8 19:37:20.990663 kubelet[3635]: E1008 19:37:20.990418 3635 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/927d8614-6b62-4cbc-b36c-3bafec5bde7b-kube-api-access-qdnm7 podName:927d8614-6b62-4cbc-b36c-3bafec5bde7b nodeName:}" failed. No retries permitted until 2024-10-08 19:37:21.490383399 +0000 UTC m=+15.024240827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qdnm7" (UniqueName: "kubernetes.io/projected/927d8614-6b62-4cbc-b36c-3bafec5bde7b-kube-api-access-qdnm7") pod "kube-proxy-zbxh7" (UID: "927d8614-6b62-4cbc-b36c-3bafec5bde7b") : configmap "kube-root-ca.crt" not found
Oct 8 19:37:21.599924 kubelet[3635]: I1008 19:37:21.599867 3635 topology_manager.go:215] "Topology Admit Handler" podUID="df52d86a-a1a2-4a82-9de9-dea50c43b572" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-25q9j"
Oct 8 19:37:21.684957 kubelet[3635]: I1008 19:37:21.684849 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hg79\" (UniqueName: \"kubernetes.io/projected/df52d86a-a1a2-4a82-9de9-dea50c43b572-kube-api-access-5hg79\") pod \"tigera-operator-5d56685c77-25q9j\" (UID: \"df52d86a-a1a2-4a82-9de9-dea50c43b572\") " pod="tigera-operator/tigera-operator-5d56685c77-25q9j"
Oct 8 19:37:21.684957 kubelet[3635]: I1008 19:37:21.684917 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df52d86a-a1a2-4a82-9de9-dea50c43b572-var-lib-calico\") pod \"tigera-operator-5d56685c77-25q9j\" (UID: \"df52d86a-a1a2-4a82-9de9-dea50c43b572\") " pod="tigera-operator/tigera-operator-5d56685c77-25q9j"
Oct 8 19:37:21.756936 containerd[2143]: time="2024-10-08T19:37:21.756821782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbxh7,Uid:927d8614-6b62-4cbc-b36c-3bafec5bde7b,Namespace:kube-system,Attempt:0,}"
Oct 8 19:37:21.824859 containerd[2143]: time="2024-10-08T19:37:21.823763871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:37:21.824859 containerd[2143]: time="2024-10-08T19:37:21.823874983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:37:21.824859 containerd[2143]: time="2024-10-08T19:37:21.823912572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:21.825149 containerd[2143]: time="2024-10-08T19:37:21.824707870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:21.905230 containerd[2143]: time="2024-10-08T19:37:21.905070717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbxh7,Uid:927d8614-6b62-4cbc-b36c-3bafec5bde7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"40aeeae83fdcfbd3d935a493d8dce66c1438b6c834e80af3d2ef02e3a90c35a1\""
Oct 8 19:37:21.913896 containerd[2143]: time="2024-10-08T19:37:21.912950427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-25q9j,Uid:df52d86a-a1a2-4a82-9de9-dea50c43b572,Namespace:tigera-operator,Attempt:0,}"
Oct 8 19:37:21.919897 containerd[2143]: time="2024-10-08T19:37:21.919628219Z" level=info msg="CreateContainer within sandbox \"40aeeae83fdcfbd3d935a493d8dce66c1438b6c834e80af3d2ef02e3a90c35a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 19:37:21.954887 containerd[2143]: time="2024-10-08T19:37:21.954807101Z" level=info msg="CreateContainer within sandbox \"40aeeae83fdcfbd3d935a493d8dce66c1438b6c834e80af3d2ef02e3a90c35a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"10d7484f53c89f6377844dd894c06ac5c2a7e2f60d5ff98e874b1e64edbfc3d1\""
Oct 8 19:37:21.957141 containerd[2143]: time="2024-10-08T19:37:21.956917961Z" level=info msg="StartContainer for \"10d7484f53c89f6377844dd894c06ac5c2a7e2f60d5ff98e874b1e64edbfc3d1\""
Oct 8 19:37:21.965763 containerd[2143]: time="2024-10-08T19:37:21.965314288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:37:21.965763 containerd[2143]: time="2024-10-08T19:37:21.965425125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:37:21.965763 containerd[2143]: time="2024-10-08T19:37:21.965533539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:21.966388 containerd[2143]: time="2024-10-08T19:37:21.965725311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:22.082966 containerd[2143]: time="2024-10-08T19:37:22.082871705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-25q9j,Uid:df52d86a-a1a2-4a82-9de9-dea50c43b572,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"838c89eb35b0228d3caf99a338d167bea91925f613f0f110cb440c9e02c2fd3b\""
Oct 8 19:37:22.093301 containerd[2143]: time="2024-10-08T19:37:22.091934731Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 19:37:22.105910 containerd[2143]: time="2024-10-08T19:37:22.105785894Z" level=info msg="StartContainer for \"10d7484f53c89f6377844dd894c06ac5c2a7e2f60d5ff98e874b1e64edbfc3d1\" returns successfully"
Oct 8 19:37:23.802463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435391551.mount: Deactivated successfully.
Oct 8 19:37:24.445875 containerd[2143]: time="2024-10-08T19:37:24.445776656Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:37:24.447487 containerd[2143]: time="2024-10-08T19:37:24.447426155Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485923"
Oct 8 19:37:24.448875 containerd[2143]: time="2024-10-08T19:37:24.448792319Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:37:24.454081 containerd[2143]: time="2024-10-08T19:37:24.453770849Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:37:24.459941 containerd[2143]: time="2024-10-08T19:37:24.457515628Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.364344915s"
Oct 8 19:37:24.459941 containerd[2143]: time="2024-10-08T19:37:24.457581547Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Oct 8 19:37:24.463980 containerd[2143]: time="2024-10-08T19:37:24.463926373Z" level=info msg="CreateContainer within sandbox \"838c89eb35b0228d3caf99a338d167bea91925f613f0f110cb440c9e02c2fd3b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 8 19:37:24.484240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559186705.mount: Deactivated successfully.
Oct 8 19:37:24.485745 containerd[2143]: time="2024-10-08T19:37:24.485337426Z" level=info msg="CreateContainer within sandbox \"838c89eb35b0228d3caf99a338d167bea91925f613f0f110cb440c9e02c2fd3b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"725ec0cf4e1b4ef8f4956734c9fbb3b04dfb25f3fcb8e124620bcade54c86f4a\""
Oct 8 19:37:24.487329 containerd[2143]: time="2024-10-08T19:37:24.487260820Z" level=info msg="StartContainer for \"725ec0cf4e1b4ef8f4956734c9fbb3b04dfb25f3fcb8e124620bcade54c86f4a\""
Oct 8 19:37:24.581280 containerd[2143]: time="2024-10-08T19:37:24.581223471Z" level=info msg="StartContainer for \"725ec0cf4e1b4ef8f4956734c9fbb3b04dfb25f3fcb8e124620bcade54c86f4a\" returns successfully"
Oct 8 19:37:24.906052 kubelet[3635]: I1008 19:37:24.905921 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zbxh7" podStartSLOduration=4.905855247 podStartE2EDuration="4.905855247s" podCreationTimestamp="2024-10-08 19:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:22.901361421 +0000 UTC m=+16.435218861" watchObservedRunningTime="2024-10-08 19:37:24.905855247 +0000 UTC m=+18.439712687"
Oct 8 19:37:29.545380 kubelet[3635]: I1008 19:37:29.545284 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-25q9j" podStartSLOduration=6.173263994 podStartE2EDuration="8.545196422s" podCreationTimestamp="2024-10-08 19:37:21 +0000 UTC" firstStartedPulling="2024-10-08 19:37:22.086020573 +0000 UTC m=+15.619878013" lastFinishedPulling="2024-10-08 19:37:24.457953001 +0000 UTC m=+17.991810441" observedRunningTime="2024-10-08 19:37:24.907663798 +0000 UTC m=+18.441521238" watchObservedRunningTime="2024-10-08 19:37:29.545196422 +0000 UTC m=+23.079053970"
Oct 8 19:37:29.548493 kubelet[3635]: I1008 19:37:29.545597 3635 topology_manager.go:215] "Topology Admit Handler" podUID="68be8ccd-f3ac-4c8f-a0b8-e1706521839d" podNamespace="calico-system" podName="calico-typha-6b9dd586b8-mctks"
Oct 8 19:37:29.638438 kubelet[3635]: I1008 19:37:29.638353 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/68be8ccd-f3ac-4c8f-a0b8-e1706521839d-typha-certs\") pod \"calico-typha-6b9dd586b8-mctks\" (UID: \"68be8ccd-f3ac-4c8f-a0b8-e1706521839d\") " pod="calico-system/calico-typha-6b9dd586b8-mctks"
Oct 8 19:37:29.638661 kubelet[3635]: I1008 19:37:29.638464 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75m7n\" (UniqueName: \"kubernetes.io/projected/68be8ccd-f3ac-4c8f-a0b8-e1706521839d-kube-api-access-75m7n\") pod \"calico-typha-6b9dd586b8-mctks\" (UID: \"68be8ccd-f3ac-4c8f-a0b8-e1706521839d\") " pod="calico-system/calico-typha-6b9dd586b8-mctks"
Oct 8 19:37:29.638661 kubelet[3635]: I1008 19:37:29.638518 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68be8ccd-f3ac-4c8f-a0b8-e1706521839d-tigera-ca-bundle\") pod \"calico-typha-6b9dd586b8-mctks\" (UID: \"68be8ccd-f3ac-4c8f-a0b8-e1706521839d\") " pod="calico-system/calico-typha-6b9dd586b8-mctks"
Oct 8 19:37:29.844329 kubelet[3635]: I1008 19:37:29.844153 3635 topology_manager.go:215] "Topology Admit Handler" podUID="4d95e58d-275a-43eb-aa4f-84416383977f" podNamespace="calico-system" podName="calico-node-lz4ns"
Oct 8 19:37:29.910567 containerd[2143]: time="2024-10-08T19:37:29.910496182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9dd586b8-mctks,Uid:68be8ccd-f3ac-4c8f-a0b8-e1706521839d,Namespace:calico-system,Attempt:0,}"
Oct 8 19:37:29.945261 kubelet[3635]: I1008 19:37:29.944174 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-lib-modules\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.945261 kubelet[3635]: I1008 19:37:29.944994 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-policysync\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.945490 kubelet[3635]: I1008 19:37:29.945405 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-var-lib-calico\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.946563 kubelet[3635]: I1008 19:37:29.945615 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-cni-log-dir\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.946563 kubelet[3635]: I1008 19:37:29.946102 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-var-run-calico\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.946563 kubelet[3635]: I1008 19:37:29.946375 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-cni-bin-dir\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.948845 kubelet[3635]: I1008 19:37:29.946711 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtvmh\" (UniqueName: \"kubernetes.io/projected/4d95e58d-275a-43eb-aa4f-84416383977f-kube-api-access-gtvmh\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.948845 kubelet[3635]: I1008 19:37:29.946950 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-xtables-lock\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.948845 kubelet[3635]: I1008 19:37:29.947301 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-flexvol-driver-host\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.948845 kubelet[3635]: I1008 19:37:29.947727 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4d95e58d-275a-43eb-aa4f-84416383977f-node-certs\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.948845 kubelet[3635]: I1008 19:37:29.947801 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4d95e58d-275a-43eb-aa4f-84416383977f-cni-net-dir\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.949761 kubelet[3635]: I1008 19:37:29.947850 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d95e58d-275a-43eb-aa4f-84416383977f-tigera-ca-bundle\") pod \"calico-node-lz4ns\" (UID: \"4d95e58d-275a-43eb-aa4f-84416383977f\") " pod="calico-system/calico-node-lz4ns"
Oct 8 19:37:29.985882 kubelet[3635]: I1008 19:37:29.985484 3635 topology_manager.go:215] "Topology Admit Handler" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" podNamespace="calico-system" podName="csi-node-driver-7rjnd"
Oct 8 19:37:29.990980 kubelet[3635]: E1008 19:37:29.989910 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f"
Oct 8 19:37:30.014059 containerd[2143]: time="2024-10-08T19:37:30.009262658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:37:30.014059 containerd[2143]: time="2024-10-08T19:37:30.009373159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:37:30.014059 containerd[2143]: time="2024-10-08T19:37:30.009410796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:30.014059 containerd[2143]: time="2024-10-08T19:37:30.009591689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:30.053252 kubelet[3635]: I1008 19:37:30.050899 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16c955fe-656e-4e38-9e91-8bafa8ad1d2f-kubelet-dir\") pod \"csi-node-driver-7rjnd\" (UID: \"16c955fe-656e-4e38-9e91-8bafa8ad1d2f\") " pod="calico-system/csi-node-driver-7rjnd"
Oct 8 19:37:30.053252 kubelet[3635]: I1008 19:37:30.051011 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/16c955fe-656e-4e38-9e91-8bafa8ad1d2f-socket-dir\") pod \"csi-node-driver-7rjnd\" (UID: \"16c955fe-656e-4e38-9e91-8bafa8ad1d2f\") " pod="calico-system/csi-node-driver-7rjnd"
Oct 8 19:37:30.055611 kubelet[3635]: E1008 19:37:30.055565 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.055921 kubelet[3635]: W1008 19:37:30.055884 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.056262 kubelet[3635]: E1008 19:37:30.056209 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 8 19:37:30.059934 kubelet[3635]: E1008 19:37:30.059153 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.059934 kubelet[3635]: W1008 19:37:30.059196 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.059934 kubelet[3635]: E1008 19:37:30.059255 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.059934 kubelet[3635]: I1008 19:37:30.059309 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/16c955fe-656e-4e38-9e91-8bafa8ad1d2f-varrun\") pod \"csi-node-driver-7rjnd\" (UID: \"16c955fe-656e-4e38-9e91-8bafa8ad1d2f\") " pod="calico-system/csi-node-driver-7rjnd"
Oct 8 19:37:30.063161 kubelet[3635]: E1008 19:37:30.063106 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.063161 kubelet[3635]: W1008 19:37:30.063153 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.063696 kubelet[3635]: E1008 19:37:30.063206 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.066421 kubelet[3635]: E1008 19:37:30.065790 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.066421 kubelet[3635]: W1008 19:37:30.065829 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.066421 kubelet[3635]: E1008 19:37:30.065882 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.066732 kubelet[3635]: E1008 19:37:30.066468 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.066732 kubelet[3635]: W1008 19:37:30.066490 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.067965 kubelet[3635]: E1008 19:37:30.066983 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.068187 kubelet[3635]: E1008 19:37:30.068148 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.068290 kubelet[3635]: W1008 19:37:30.068184 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.068290 kubelet[3635]: E1008 19:37:30.068223 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.069248 kubelet[3635]: E1008 19:37:30.068934 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.069248 kubelet[3635]: W1008 19:37:30.068963 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.069248 kubelet[3635]: E1008 19:37:30.068994 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.070596 kubelet[3635]: E1008 19:37:30.069834 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.070596 kubelet[3635]: W1008 19:37:30.070096 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.070596 kubelet[3635]: E1008 19:37:30.070143 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.070596 kubelet[3635]: I1008 19:37:30.070210 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wndmd\" (UniqueName: \"kubernetes.io/projected/16c955fe-656e-4e38-9e91-8bafa8ad1d2f-kube-api-access-wndmd\") pod \"csi-node-driver-7rjnd\" (UID: \"16c955fe-656e-4e38-9e91-8bafa8ad1d2f\") " pod="calico-system/csi-node-driver-7rjnd"
Oct 8 19:37:30.072412 kubelet[3635]: E1008 19:37:30.071066 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.072412 kubelet[3635]: W1008 19:37:30.071105 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.072412 kubelet[3635]: E1008 19:37:30.071143 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 8 19:37:30.077993 kubelet[3635]: E1008 19:37:30.077488 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.077993 kubelet[3635]: W1008 19:37:30.077524 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.077993 kubelet[3635]: E1008 19:37:30.077563 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.081526 kubelet[3635]: E1008 19:37:30.080954 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.081526 kubelet[3635]: W1008 19:37:30.080994 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.081526 kubelet[3635]: E1008 19:37:30.081087 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.084583 kubelet[3635]: E1008 19:37:30.083765 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.084583 kubelet[3635]: W1008 19:37:30.083799 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.084583 kubelet[3635]: E1008 19:37:30.083860 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.085849 kubelet[3635]: E1008 19:37:30.085442 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.085849 kubelet[3635]: W1008 19:37:30.085485 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.085849 kubelet[3635]: E1008 19:37:30.085530 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.088132 kubelet[3635]: E1008 19:37:30.087836 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.088132 kubelet[3635]: W1008 19:37:30.087889 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.088132 kubelet[3635]: E1008 19:37:30.087942 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.098412 kubelet[3635]: E1008 19:37:30.098193 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.099153 kubelet[3635]: W1008 19:37:30.099104 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.099410 kubelet[3635]: E1008 19:37:30.099217 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.103630 kubelet[3635]: E1008 19:37:30.103118 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.103630 kubelet[3635]: W1008 19:37:30.103163 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.103630 kubelet[3635]: E1008 19:37:30.103209 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.106831 kubelet[3635]: E1008 19:37:30.106244 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.106831 kubelet[3635]: W1008 19:37:30.106278 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.106831 kubelet[3635]: E1008 19:37:30.106329 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 8 19:37:30.112808 kubelet[3635]: E1008 19:37:30.112272 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.113837 kubelet[3635]: W1008 19:37:30.112574 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.115339 kubelet[3635]: E1008 19:37:30.113607 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.129183 kubelet[3635]: E1008 19:37:30.129123 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.129183 kubelet[3635]: W1008 19:37:30.129170 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.129666 kubelet[3635]: E1008 19:37:30.129226 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.130416 kubelet[3635]: E1008 19:37:30.130175 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.130416 kubelet[3635]: W1008 19:37:30.130205 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.130416 kubelet[3635]: E1008 19:37:30.130242 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.131516 kubelet[3635]: E1008 19:37:30.131307 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.131516 kubelet[3635]: W1008 19:37:30.131338 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.131516 kubelet[3635]: E1008 19:37:30.131371 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.132341 kubelet[3635]: E1008 19:37:30.132077 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.132341 kubelet[3635]: W1008 19:37:30.132107 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.132341 kubelet[3635]: E1008 19:37:30.132140 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.133398 kubelet[3635]: E1008 19:37:30.132954 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.133398 kubelet[3635]: W1008 19:37:30.132984 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.133398 kubelet[3635]: E1008 19:37:30.133019 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.134906 kubelet[3635]: E1008 19:37:30.134425 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.134906 kubelet[3635]: W1008 19:37:30.134476 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.134906 kubelet[3635]: E1008 19:37:30.134566 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.138417 kubelet[3635]: E1008 19:37:30.135660 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.138417 kubelet[3635]: W1008 19:37:30.137128 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.138417 kubelet[3635]: E1008 19:37:30.137172 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 8 19:37:30.139683 kubelet[3635]: E1008 19:37:30.139647 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.139991 kubelet[3635]: W1008 19:37:30.139960 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.140548 kubelet[3635]: E1008 19:37:30.140206 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.141044 kubelet[3635]: E1008 19:37:30.141001 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.141609 kubelet[3635]: W1008 19:37:30.141185 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.141609 kubelet[3635]: E1008 19:37:30.141249 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.146381 kubelet[3635]: E1008 19:37:30.146260 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.146381 kubelet[3635]: W1008 19:37:30.146291 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.148955 kubelet[3635]: E1008 19:37:30.148128 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.148955 kubelet[3635]: E1008 19:37:30.148742 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.148955 kubelet[3635]: W1008 19:37:30.148767 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.149645 kubelet[3635]: E1008 19:37:30.149298 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.150967 kubelet[3635]: E1008 19:37:30.150555 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.150967 kubelet[3635]: W1008 19:37:30.150585 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.150967 kubelet[3635]: E1008 19:37:30.150656 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.153678 kubelet[3635]: E1008 19:37:30.153302 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.153678 kubelet[3635]: W1008 19:37:30.153338 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.153678 kubelet[3635]: E1008 19:37:30.153395 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.154651 kubelet[3635]: E1008 19:37:30.154138 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.154651 kubelet[3635]: W1008 19:37:30.154166 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.154651 kubelet[3635]: E1008 19:37:30.154199 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:37:30.156076 kubelet[3635]: E1008 19:37:30.155709 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:37:30.156076 kubelet[3635]: W1008 19:37:30.155743 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:37:30.156076 kubelet[3635]: E1008 19:37:30.155791 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.157339 kubelet[3635]: E1008 19:37:30.156771 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.157339 kubelet[3635]: W1008 19:37:30.156800 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.157339 kubelet[3635]: E1008 19:37:30.156955 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.158261 kubelet[3635]: E1008 19:37:30.158139 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.158653 kubelet[3635]: W1008 19:37:30.158379 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.158653 kubelet[3635]: E1008 19:37:30.158454 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.158653 kubelet[3635]: I1008 19:37:30.158560 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/16c955fe-656e-4e38-9e91-8bafa8ad1d2f-registration-dir\") pod \"csi-node-driver-7rjnd\" (UID: \"16c955fe-656e-4e38-9e91-8bafa8ad1d2f\") " pod="calico-system/csi-node-driver-7rjnd" Oct 8 19:37:30.159322 kubelet[3635]: E1008 19:37:30.159203 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.159750 kubelet[3635]: W1008 19:37:30.159437 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.160419 kubelet[3635]: E1008 19:37:30.160288 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.160847 kubelet[3635]: W1008 19:37:30.160561 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.161216 kubelet[3635]: E1008 19:37:30.161107 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.161216 kubelet[3635]: E1008 19:37:30.161164 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.161699 kubelet[3635]: E1008 19:37:30.161429 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.161699 kubelet[3635]: W1008 19:37:30.161452 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.161699 kubelet[3635]: E1008 19:37:30.161509 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.164080 kubelet[3635]: E1008 19:37:30.162278 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.164080 kubelet[3635]: W1008 19:37:30.162317 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.164444 kubelet[3635]: E1008 19:37:30.164100 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.164892 kubelet[3635]: E1008 19:37:30.164829 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.165086 kubelet[3635]: W1008 19:37:30.165052 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.167318 kubelet[3635]: E1008 19:37:30.166598 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.175688 kubelet[3635]: E1008 19:37:30.175649 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.176005 kubelet[3635]: W1008 19:37:30.175972 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.179429 kubelet[3635]: E1008 19:37:30.179252 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.179429 kubelet[3635]: W1008 19:37:30.179284 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.179938 kubelet[3635]: E1008 19:37:30.179789 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.179938 kubelet[3635]: W1008 19:37:30.179814 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Oct 8 19:37:30.180818 kubelet[3635]: E1008 19:37:30.180533 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.180818 kubelet[3635]: W1008 19:37:30.180604 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.181569 kubelet[3635]: E1008 19:37:30.181393 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.181569 kubelet[3635]: W1008 19:37:30.181418 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.182196 kubelet[3635]: E1008 19:37:30.181844 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.182196 kubelet[3635]: W1008 19:37:30.181863 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.182196 kubelet[3635]: E1008 19:37:30.181892 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.182196 kubelet[3635]: E1008 19:37:30.181932 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.182873 kubelet[3635]: E1008 19:37:30.182826 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.184269 kubelet[3635]: W1008 19:37:30.183210 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.184269 kubelet[3635]: E1008 19:37:30.183279 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.185593 kubelet[3635]: E1008 19:37:30.185501 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.186586 kubelet[3635]: E1008 19:37:30.186557 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.186765 kubelet[3635]: W1008 19:37:30.186737 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.186893 kubelet[3635]: E1008 19:37:30.186871 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.187454 kubelet[3635]: E1008 19:37:30.187433 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.187572 kubelet[3635]: W1008 19:37:30.187551 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.187702 kubelet[3635]: E1008 19:37:30.187681 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.187865 kubelet[3635]: E1008 19:37:30.187847 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.188475 kubelet[3635]: E1008 19:37:30.188444 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.188979 kubelet[3635]: W1008 19:37:30.188661 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.188979 kubelet[3635]: E1008 19:37:30.188714 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.189377 kubelet[3635]: E1008 19:37:30.189355 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.189593 kubelet[3635]: W1008 19:37:30.189569 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.190002 kubelet[3635]: E1008 19:37:30.189727 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.190002 kubelet[3635]: E1008 19:37:30.189771 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.190617 kubelet[3635]: E1008 19:37:30.190572 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.192076 kubelet[3635]: W1008 19:37:30.190748 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.192076 kubelet[3635]: E1008 19:37:30.190812 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.193398 kubelet[3635]: E1008 19:37:30.193316 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.194422 kubelet[3635]: E1008 19:37:30.194385 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.194654 kubelet[3635]: W1008 19:37:30.194626 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.194871 kubelet[3635]: E1008 19:37:30.194846 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.195433 kubelet[3635]: E1008 19:37:30.195405 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.195598 kubelet[3635]: W1008 19:37:30.195573 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.195720 kubelet[3635]: E1008 19:37:30.195689 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.196396 kubelet[3635]: E1008 19:37:30.196366 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.196710 kubelet[3635]: W1008 19:37:30.196668 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.197008 kubelet[3635]: E1008 19:37:30.196981 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.202449 kubelet[3635]: E1008 19:37:30.202404 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.202820 kubelet[3635]: W1008 19:37:30.202769 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.205110 kubelet[3635]: E1008 19:37:30.203795 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.207700 kubelet[3635]: E1008 19:37:30.207534 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.208222 kubelet[3635]: W1008 19:37:30.208073 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.209240 kubelet[3635]: E1008 19:37:30.208419 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.210585 kubelet[3635]: E1008 19:37:30.210552 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.210801 kubelet[3635]: W1008 19:37:30.210773 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.210936 kubelet[3635]: E1008 19:37:30.210915 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.215934 kubelet[3635]: E1008 19:37:30.214816 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.215934 kubelet[3635]: W1008 19:37:30.214864 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.218350 kubelet[3635]: E1008 19:37:30.218311 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.218645 kubelet[3635]: E1008 19:37:30.218621 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.222205 kubelet[3635]: W1008 19:37:30.222132 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.222205 kubelet[3635]: E1008 19:37:30.222211 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.226286 kubelet[3635]: E1008 19:37:30.226227 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.226286 kubelet[3635]: W1008 19:37:30.226270 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.226488 kubelet[3635]: E1008 19:37:30.226322 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.229621 kubelet[3635]: E1008 19:37:30.229228 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.229621 kubelet[3635]: W1008 19:37:30.229273 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.229621 kubelet[3635]: E1008 19:37:30.229320 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.231937 kubelet[3635]: E1008 19:37:30.231590 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.231937 kubelet[3635]: W1008 19:37:30.231635 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.231937 kubelet[3635]: E1008 19:37:30.231715 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.254133 kubelet[3635]: E1008 19:37:30.254074 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.254506 kubelet[3635]: W1008 19:37:30.254362 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.254506 kubelet[3635]: E1008 19:37:30.254425 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.306175 kubelet[3635]: E1008 19:37:30.305934 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.306175 kubelet[3635]: W1008 19:37:30.305968 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.306175 kubelet[3635]: E1008 19:37:30.306005 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.308656 kubelet[3635]: E1008 19:37:30.307695 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.308656 kubelet[3635]: W1008 19:37:30.307728 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.308656 kubelet[3635]: E1008 19:37:30.307763 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.309707 kubelet[3635]: E1008 19:37:30.309573 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.309707 kubelet[3635]: W1008 19:37:30.309613 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.309707 kubelet[3635]: E1008 19:37:30.309669 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.312094 kubelet[3635]: E1008 19:37:30.312045 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.312094 kubelet[3635]: W1008 19:37:30.312084 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.312538 kubelet[3635]: E1008 19:37:30.312138 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.316648 kubelet[3635]: E1008 19:37:30.315540 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.316648 kubelet[3635]: W1008 19:37:30.315577 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.316648 kubelet[3635]: E1008 19:37:30.316297 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.319022 kubelet[3635]: E1008 19:37:30.318506 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.319022 kubelet[3635]: W1008 19:37:30.318560 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.320099 kubelet[3635]: E1008 19:37:30.319269 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.322643 kubelet[3635]: E1008 19:37:30.322462 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.323238 kubelet[3635]: W1008 19:37:30.322742 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.324076 kubelet[3635]: E1008 19:37:30.323730 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.324076 kubelet[3635]: E1008 19:37:30.324075 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.324762 containerd[2143]: time="2024-10-08T19:37:30.323821081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9dd586b8-mctks,Uid:68be8ccd-f3ac-4c8f-a0b8-e1706521839d,Namespace:calico-system,Attempt:0,} returns sandbox id \"80f3d226b76805856922992c05dca357d099ba97aa9d38b93fb3ce37c000f802\"" Oct 8 19:37:30.326091 kubelet[3635]: W1008 19:37:30.324123 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.326091 kubelet[3635]: E1008 19:37:30.324242 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.326091 kubelet[3635]: E1008 19:37:30.325193 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.326091 kubelet[3635]: W1008 19:37:30.325229 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.326091 kubelet[3635]: E1008 19:37:30.325557 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.328746 kubelet[3635]: E1008 19:37:30.328350 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.328746 kubelet[3635]: W1008 19:37:30.328388 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.328746 kubelet[3635]: E1008 19:37:30.328470 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.330150 kubelet[3635]: E1008 19:37:30.329863 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.330150 kubelet[3635]: W1008 19:37:30.329901 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.330567 kubelet[3635]: E1008 19:37:30.330175 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.333055 containerd[2143]: time="2024-10-08T19:37:30.332352137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:37:30.333196 kubelet[3635]: E1008 19:37:30.332676 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.333196 kubelet[3635]: W1008 19:37:30.332726 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.333196 kubelet[3635]: E1008 19:37:30.332815 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.334933 kubelet[3635]: E1008 19:37:30.333745 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.334933 kubelet[3635]: W1008 19:37:30.333820 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.334933 kubelet[3635]: E1008 19:37:30.334329 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.334933 kubelet[3635]: E1008 19:37:30.334856 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.334933 kubelet[3635]: W1008 19:37:30.334882 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.336514 kubelet[3635]: E1008 19:37:30.335200 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.336514 kubelet[3635]: E1008 19:37:30.335782 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.336514 kubelet[3635]: W1008 19:37:30.335818 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.336514 kubelet[3635]: E1008 19:37:30.336075 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.337445 kubelet[3635]: E1008 19:37:30.336711 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.337445 kubelet[3635]: W1008 19:37:30.336742 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.337445 kubelet[3635]: E1008 19:37:30.336858 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.339866 kubelet[3635]: E1008 19:37:30.338398 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.339866 kubelet[3635]: W1008 19:37:30.338427 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.339866 kubelet[3635]: E1008 19:37:30.338593 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.339866 kubelet[3635]: E1008 19:37:30.339609 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.339866 kubelet[3635]: W1008 19:37:30.339764 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.340337 kubelet[3635]: E1008 19:37:30.339934 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.340960 kubelet[3635]: E1008 19:37:30.340909 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.341538 kubelet[3635]: W1008 19:37:30.340944 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.342153 kubelet[3635]: E1008 19:37:30.341207 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.343653 kubelet[3635]: E1008 19:37:30.342908 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.343653 kubelet[3635]: W1008 19:37:30.342985 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.343653 kubelet[3635]: E1008 19:37:30.343226 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.343915 kubelet[3635]: E1008 19:37:30.343818 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.343915 kubelet[3635]: W1008 19:37:30.343839 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.344054 kubelet[3635]: E1008 19:37:30.344005 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.345534 kubelet[3635]: E1008 19:37:30.344612 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.345534 kubelet[3635]: W1008 19:37:30.344641 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.345534 kubelet[3635]: E1008 19:37:30.344782 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.345534 kubelet[3635]: E1008 19:37:30.345130 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.345534 kubelet[3635]: W1008 19:37:30.345147 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.347973 kubelet[3635]: E1008 19:37:30.346842 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.347973 kubelet[3635]: E1008 19:37:30.348118 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.349624 kubelet[3635]: W1008 19:37:30.348164 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.349624 kubelet[3635]: E1008 19:37:30.348691 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:30.354583 kubelet[3635]: E1008 19:37:30.354259 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.354583 kubelet[3635]: W1008 19:37:30.354299 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.354583 kubelet[3635]: E1008 19:37:30.354338 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.395824 kubelet[3635]: E1008 19:37:30.395659 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:30.395824 kubelet[3635]: W1008 19:37:30.395691 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:30.396384 kubelet[3635]: E1008 19:37:30.395950 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:30.467393 containerd[2143]: time="2024-10-08T19:37:30.466905471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lz4ns,Uid:4d95e58d-275a-43eb-aa4f-84416383977f,Namespace:calico-system,Attempt:0,}" Oct 8 19:37:30.524274 containerd[2143]: time="2024-10-08T19:37:30.523931197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:30.524274 containerd[2143]: time="2024-10-08T19:37:30.524078963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:30.525055 containerd[2143]: time="2024-10-08T19:37:30.524857193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:30.526549 containerd[2143]: time="2024-10-08T19:37:30.526256377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:30.619191 containerd[2143]: time="2024-10-08T19:37:30.618812744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lz4ns,Uid:4d95e58d-275a-43eb-aa4f-84416383977f,Namespace:calico-system,Attempt:0,} returns sandbox id \"419006a8865fdef64635b66be1018740f67d78af8cab082a911d619fe825b54d\"" Oct 8 19:37:31.767841 kubelet[3635]: E1008 19:37:31.767403 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" Oct 8 19:37:32.837476 containerd[2143]: time="2024-10-08T19:37:32.837410829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:32.839082 containerd[2143]: time="2024-10-08T19:37:32.838989743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:37:32.841323 containerd[2143]: time="2024-10-08T19:37:32.841262918Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:32.848055 containerd[2143]: time="2024-10-08T19:37:32.847645417Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:32.850558 containerd[2143]: time="2024-10-08T19:37:32.849981620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.5151005s" Oct 8 19:37:32.850558 containerd[2143]: time="2024-10-08T19:37:32.850076229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:37:32.852618 containerd[2143]: time="2024-10-08T19:37:32.852189212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:37:32.900732 containerd[2143]: time="2024-10-08T19:37:32.899646712Z" level=info msg="CreateContainer within sandbox \"80f3d226b76805856922992c05dca357d099ba97aa9d38b93fb3ce37c000f802\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:37:32.933247 containerd[2143]: time="2024-10-08T19:37:32.933083729Z" level=info msg="CreateContainer within sandbox \"80f3d226b76805856922992c05dca357d099ba97aa9d38b93fb3ce37c000f802\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5a383b978a2f7471fab0e34f01a0c0564619090f26af7611ef4615831e789077\"" Oct 8 19:37:32.939233 containerd[2143]: time="2024-10-08T19:37:32.938569473Z" level=info msg="StartContainer for \"5a383b978a2f7471fab0e34f01a0c0564619090f26af7611ef4615831e789077\"" Oct 8 19:37:33.164683 containerd[2143]: time="2024-10-08T19:37:33.164524403Z" level=info msg="StartContainer for \"5a383b978a2f7471fab0e34f01a0c0564619090f26af7611ef4615831e789077\" returns successfully" 
Oct 8 19:37:33.769630 kubelet[3635]: E1008 19:37:33.766791 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" Oct 8 19:37:33.870759 systemd[1]: run-containerd-runc-k8s.io-5a383b978a2f7471fab0e34f01a0c0564619090f26af7611ef4615831e789077-runc.1D7Jkl.mount: Deactivated successfully. Oct 8 19:37:33.990626 kubelet[3635]: E1008 19:37:33.989008 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.990626 kubelet[3635]: W1008 19:37:33.989091 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.990626 kubelet[3635]: E1008 19:37:33.989153 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:33.990626 kubelet[3635]: E1008 19:37:33.989723 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.990626 kubelet[3635]: W1008 19:37:33.989747 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.990626 kubelet[3635]: E1008 19:37:33.989804 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:33.992294 kubelet[3635]: E1008 19:37:33.991678 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.992294 kubelet[3635]: W1008 19:37:33.991741 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.992294 kubelet[3635]: E1008 19:37:33.991778 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:33.993570 kubelet[3635]: E1008 19:37:33.992518 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.993570 kubelet[3635]: W1008 19:37:33.992557 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.993570 kubelet[3635]: E1008 19:37:33.992598 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:33.993570 kubelet[3635]: E1008 19:37:33.993211 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.993570 kubelet[3635]: W1008 19:37:33.993234 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.993570 kubelet[3635]: E1008 19:37:33.993264 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:33.993934 kubelet[3635]: E1008 19:37:33.993671 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.993934 kubelet[3635]: W1008 19:37:33.993691 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.993934 kubelet[3635]: E1008 19:37:33.993718 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:33.995576 kubelet[3635]: E1008 19:37:33.994115 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.995576 kubelet[3635]: W1008 19:37:33.994135 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.995576 kubelet[3635]: E1008 19:37:33.994163 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:33.995576 kubelet[3635]: E1008 19:37:33.994497 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.995576 kubelet[3635]: W1008 19:37:33.994516 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.995576 kubelet[3635]: E1008 19:37:33.994544 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:33.995576 kubelet[3635]: E1008 19:37:33.995377 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.995576 kubelet[3635]: W1008 19:37:33.995402 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.995576 kubelet[3635]: E1008 19:37:33.995433 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:33.997297 kubelet[3635]: E1008 19:37:33.995889 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.997297 kubelet[3635]: W1008 19:37:33.995906 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.997297 kubelet[3635]: E1008 19:37:33.995956 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:33.997297 kubelet[3635]: E1008 19:37:33.996507 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.997297 kubelet[3635]: W1008 19:37:33.996544 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.997297 kubelet[3635]: E1008 19:37:33.996578 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:33.997716 kubelet[3635]: E1008 19:37:33.997488 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.997716 kubelet[3635]: W1008 19:37:33.997519 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.997716 kubelet[3635]: E1008 19:37:33.997553 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:33.999823 kubelet[3635]: E1008 19:37:33.997936 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.999823 kubelet[3635]: W1008 19:37:33.997965 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.999823 kubelet[3635]: E1008 19:37:33.997993 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:33.999823 kubelet[3635]: E1008 19:37:33.998789 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.999823 kubelet[3635]: W1008 19:37:33.998814 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.999823 kubelet[3635]: E1008 19:37:33.998845 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:33.999823 kubelet[3635]: E1008 19:37:33.999317 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:33.999823 kubelet[3635]: W1008 19:37:33.999394 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:33.999823 kubelet[3635]: E1008 19:37:33.999430 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.068180 kubelet[3635]: E1008 19:37:34.067578 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.074730 kubelet[3635]: W1008 19:37:34.074668 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.074903 kubelet[3635]: E1008 19:37:34.074750 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.076446 kubelet[3635]: E1008 19:37:34.076404 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.076446 kubelet[3635]: W1008 19:37:34.076442 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.076649 kubelet[3635]: E1008 19:37:34.076488 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.080216 kubelet[3635]: E1008 19:37:34.079094 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.080216 kubelet[3635]: W1008 19:37:34.079130 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.080216 kubelet[3635]: E1008 19:37:34.079221 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.080216 kubelet[3635]: E1008 19:37:34.080138 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.080216 kubelet[3635]: W1008 19:37:34.080162 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.082496 kubelet[3635]: E1008 19:37:34.081159 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.082496 kubelet[3635]: W1008 19:37:34.081187 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.082496 kubelet[3635]: E1008 19:37:34.082124 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.082496 kubelet[3635]: E1008 19:37:34.082379 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.082496 kubelet[3635]: E1008 19:37:34.082439 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.082496 kubelet[3635]: W1008 19:37:34.082456 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.082496 kubelet[3635]: E1008 19:37:34.082484 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.086654 kubelet[3635]: E1008 19:37:34.085320 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.086654 kubelet[3635]: W1008 19:37:34.085467 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.086654 kubelet[3635]: E1008 19:37:34.085830 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.086654 kubelet[3635]: E1008 19:37:34.086455 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.086654 kubelet[3635]: W1008 19:37:34.086478 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.086654 kubelet[3635]: E1008 19:37:34.086511 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.089673 kubelet[3635]: E1008 19:37:34.087774 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.089673 kubelet[3635]: W1008 19:37:34.087809 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.089673 kubelet[3635]: E1008 19:37:34.087847 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.089673 kubelet[3635]: E1008 19:37:34.088867 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.089673 kubelet[3635]: W1008 19:37:34.088918 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.089673 kubelet[3635]: E1008 19:37:34.088985 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.089673 kubelet[3635]: E1008 19:37:34.089360 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.089673 kubelet[3635]: W1008 19:37:34.089379 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.090209 kubelet[3635]: E1008 19:37:34.089705 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.090209 kubelet[3635]: W1008 19:37:34.089722 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.090209 kubelet[3635]: E1008 19:37:34.089785 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.090209 kubelet[3635]: E1008 19:37:34.090185 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.090425 kubelet[3635]: W1008 19:37:34.090236 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.090425 kubelet[3635]: E1008 19:37:34.090263 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.096117 kubelet[3635]: E1008 19:37:34.090756 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.096117 kubelet[3635]: W1008 19:37:34.090807 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.096117 kubelet[3635]: E1008 19:37:34.090837 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.096117 kubelet[3635]: E1008 19:37:34.091829 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.096117 kubelet[3635]: W1008 19:37:34.091853 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.096117 kubelet[3635]: E1008 19:37:34.091882 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.096117 kubelet[3635]: E1008 19:37:34.091958 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.096117 kubelet[3635]: E1008 19:37:34.092651 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.096117 kubelet[3635]: W1008 19:37:34.092674 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.096117 kubelet[3635]: E1008 19:37:34.092705 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.098858 kubelet[3635]: E1008 19:37:34.093873 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.098858 kubelet[3635]: W1008 19:37:34.093897 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.098858 kubelet[3635]: E1008 19:37:34.093928 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:37:34.104022 kubelet[3635]: E1008 19:37:34.098289 3635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:37:34.104173 kubelet[3635]: W1008 19:37:34.104016 3635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:37:34.104229 kubelet[3635]: E1008 19:37:34.104194 3635 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:37:34.254444 containerd[2143]: time="2024-10-08T19:37:34.254371810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:34.256545 containerd[2143]: time="2024-10-08T19:37:34.256387198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:37:34.258421 containerd[2143]: time="2024-10-08T19:37:34.258356901Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:34.269045 containerd[2143]: time="2024-10-08T19:37:34.268948526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:34.272311 containerd[2143]: time="2024-10-08T19:37:34.272200483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.419654894s" Oct 8 19:37:34.272311 containerd[2143]: time="2024-10-08T19:37:34.272310564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:37:34.277375 containerd[2143]: time="2024-10-08T19:37:34.276656171Z" level=info msg="CreateContainer within sandbox \"419006a8865fdef64635b66be1018740f67d78af8cab082a911d619fe825b54d\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:37:34.335181 containerd[2143]: time="2024-10-08T19:37:34.334489452Z" level=info msg="CreateContainer within sandbox \"419006a8865fdef64635b66be1018740f67d78af8cab082a911d619fe825b54d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c8a7dced44847cc69a6ecd979ffb2a98f093462e7075214768fe9465f139784e\"" Oct 8 19:37:34.357115 containerd[2143]: time="2024-10-08T19:37:34.352078581Z" level=info msg="StartContainer for \"c8a7dced44847cc69a6ecd979ffb2a98f093462e7075214768fe9465f139784e\"" Oct 8 19:37:34.536364 containerd[2143]: time="2024-10-08T19:37:34.536302019Z" level=info msg="StartContainer for \"c8a7dced44847cc69a6ecd979ffb2a98f093462e7075214768fe9465f139784e\" returns successfully" Oct 8 19:37:34.871303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8a7dced44847cc69a6ecd979ffb2a98f093462e7075214768fe9465f139784e-rootfs.mount: Deactivated successfully. Oct 8 19:37:34.974085 kubelet[3635]: I1008 19:37:34.973990 3635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:37:35.015545 kubelet[3635]: I1008 19:37:35.014464 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6b9dd586b8-mctks" podStartSLOduration=3.494480287 podStartE2EDuration="6.014401308s" podCreationTimestamp="2024-10-08 19:37:29 +0000 UTC" firstStartedPulling="2024-10-08 19:37:30.331333355 +0000 UTC m=+23.865190783" lastFinishedPulling="2024-10-08 19:37:32.851254376 +0000 UTC m=+26.385111804" observedRunningTime="2024-10-08 19:37:33.981948933 +0000 UTC m=+27.515806373" watchObservedRunningTime="2024-10-08 19:37:35.014401308 +0000 UTC m=+28.548258736" Oct 8 19:37:35.241965 containerd[2143]: time="2024-10-08T19:37:35.241820850Z" level=info msg="shim disconnected" id=c8a7dced44847cc69a6ecd979ffb2a98f093462e7075214768fe9465f139784e namespace=k8s.io Oct 8 19:37:35.241965 containerd[2143]: 
time="2024-10-08T19:37:35.241950361Z" level=warning msg="cleaning up after shim disconnected" id=c8a7dced44847cc69a6ecd979ffb2a98f093462e7075214768fe9465f139784e namespace=k8s.io Oct 8 19:37:35.241965 containerd[2143]: time="2024-10-08T19:37:35.241973689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:37:35.766350 kubelet[3635]: E1008 19:37:35.766262 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" Oct 8 19:37:35.983740 containerd[2143]: time="2024-10-08T19:37:35.983282220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:37:37.766687 kubelet[3635]: E1008 19:37:37.766629 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" Oct 8 19:37:38.995445 kubelet[3635]: I1008 19:37:38.994790 3635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:37:39.767269 kubelet[3635]: E1008 19:37:39.767224 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" Oct 8 19:37:39.854607 containerd[2143]: time="2024-10-08T19:37:39.853972178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:39.856117 containerd[2143]: 
time="2024-10-08T19:37:39.855989785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:37:39.858280 containerd[2143]: time="2024-10-08T19:37:39.858207151Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:39.862999 containerd[2143]: time="2024-10-08T19:37:39.862913537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:39.865740 containerd[2143]: time="2024-10-08T19:37:39.865562988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.882208325s" Oct 8 19:37:39.865740 containerd[2143]: time="2024-10-08T19:37:39.865617297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:37:39.869945 containerd[2143]: time="2024-10-08T19:37:39.869809511Z" level=info msg="CreateContainer within sandbox \"419006a8865fdef64635b66be1018740f67d78af8cab082a911d619fe825b54d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:37:39.892624 containerd[2143]: time="2024-10-08T19:37:39.892497314Z" level=info msg="CreateContainer within sandbox \"419006a8865fdef64635b66be1018740f67d78af8cab082a911d619fe825b54d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3acb7edd966772b8a40e07987b46208ad41ded7ada4f2a8a6dd719300d4c6fd5\"" Oct 8 19:37:39.894251 containerd[2143]: 
time="2024-10-08T19:37:39.894009769Z" level=info msg="StartContainer for \"3acb7edd966772b8a40e07987b46208ad41ded7ada4f2a8a6dd719300d4c6fd5\"" Oct 8 19:37:40.005387 containerd[2143]: time="2024-10-08T19:37:40.005297622Z" level=info msg="StartContainer for \"3acb7edd966772b8a40e07987b46208ad41ded7ada4f2a8a6dd719300d4c6fd5\" returns successfully" Oct 8 19:37:41.242172 containerd[2143]: time="2024-10-08T19:37:41.242080189Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:37:41.283996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3acb7edd966772b8a40e07987b46208ad41ded7ada4f2a8a6dd719300d4c6fd5-rootfs.mount: Deactivated successfully. Oct 8 19:37:41.299165 kubelet[3635]: I1008 19:37:41.299095 3635 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:37:41.343490 kubelet[3635]: I1008 19:37:41.343409 3635 topology_manager.go:215] "Topology Admit Handler" podUID="887efa35-ebd2-44fd-a538-5709c92fc6d0" podNamespace="kube-system" podName="coredns-76f75df574-9xvcf" Oct 8 19:37:41.347954 kubelet[3635]: I1008 19:37:41.347315 3635 topology_manager.go:215] "Topology Admit Handler" podUID="855c6ee6-b78c-4f49-b63e-5d329126787e" podNamespace="kube-system" podName="coredns-76f75df574-ns6ms" Oct 8 19:37:41.365304 kubelet[3635]: I1008 19:37:41.362403 3635 topology_manager.go:215] "Topology Admit Handler" podUID="8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50" podNamespace="calico-system" podName="calico-kube-controllers-668d4c9556-nvw6q" Oct 8 19:37:41.438269 kubelet[3635]: I1008 19:37:41.438196 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855c6ee6-b78c-4f49-b63e-5d329126787e-config-volume\") pod 
\"coredns-76f75df574-ns6ms\" (UID: \"855c6ee6-b78c-4f49-b63e-5d329126787e\") " pod="kube-system/coredns-76f75df574-ns6ms" Oct 8 19:37:41.438632 kubelet[3635]: I1008 19:37:41.438610 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50-tigera-ca-bundle\") pod \"calico-kube-controllers-668d4c9556-nvw6q\" (UID: \"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50\") " pod="calico-system/calico-kube-controllers-668d4c9556-nvw6q" Oct 8 19:37:41.438853 kubelet[3635]: I1008 19:37:41.438815 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kkc5\" (UniqueName: \"kubernetes.io/projected/887efa35-ebd2-44fd-a538-5709c92fc6d0-kube-api-access-9kkc5\") pod \"coredns-76f75df574-9xvcf\" (UID: \"887efa35-ebd2-44fd-a538-5709c92fc6d0\") " pod="kube-system/coredns-76f75df574-9xvcf" Oct 8 19:37:41.439076 kubelet[3635]: I1008 19:37:41.439022 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf7vx\" (UniqueName: \"kubernetes.io/projected/8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50-kube-api-access-pf7vx\") pod \"calico-kube-controllers-668d4c9556-nvw6q\" (UID: \"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50\") " pod="calico-system/calico-kube-controllers-668d4c9556-nvw6q" Oct 8 19:37:41.439329 kubelet[3635]: I1008 19:37:41.439308 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54gh\" (UniqueName: \"kubernetes.io/projected/855c6ee6-b78c-4f49-b63e-5d329126787e-kube-api-access-l54gh\") pod \"coredns-76f75df574-ns6ms\" (UID: \"855c6ee6-b78c-4f49-b63e-5d329126787e\") " pod="kube-system/coredns-76f75df574-ns6ms" Oct 8 19:37:41.439521 kubelet[3635]: I1008 19:37:41.439486 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/887efa35-ebd2-44fd-a538-5709c92fc6d0-config-volume\") pod \"coredns-76f75df574-9xvcf\" (UID: \"887efa35-ebd2-44fd-a538-5709c92fc6d0\") " pod="kube-system/coredns-76f75df574-9xvcf" Oct 8 19:37:41.672142 containerd[2143]: time="2024-10-08T19:37:41.671927279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9xvcf,Uid:887efa35-ebd2-44fd-a538-5709c92fc6d0,Namespace:kube-system,Attempt:0,}" Oct 8 19:37:41.673860 containerd[2143]: time="2024-10-08T19:37:41.673765108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ns6ms,Uid:855c6ee6-b78c-4f49-b63e-5d329126787e,Namespace:kube-system,Attempt:0,}" Oct 8 19:37:41.685612 containerd[2143]: time="2024-10-08T19:37:41.685545243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668d4c9556-nvw6q,Uid:8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50,Namespace:calico-system,Attempt:0,}" Oct 8 19:37:41.775251 containerd[2143]: time="2024-10-08T19:37:41.774706312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rjnd,Uid:16c955fe-656e-4e38-9e91-8bafa8ad1d2f,Namespace:calico-system,Attempt:0,}" Oct 8 19:37:42.087380 containerd[2143]: time="2024-10-08T19:37:42.086930199Z" level=info msg="shim disconnected" id=3acb7edd966772b8a40e07987b46208ad41ded7ada4f2a8a6dd719300d4c6fd5 namespace=k8s.io Oct 8 19:37:42.087380 containerd[2143]: time="2024-10-08T19:37:42.087015692Z" level=warning msg="cleaning up after shim disconnected" id=3acb7edd966772b8a40e07987b46208ad41ded7ada4f2a8a6dd719300d4c6fd5 namespace=k8s.io Oct 8 19:37:42.087380 containerd[2143]: time="2024-10-08T19:37:42.087119776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:37:42.323520 containerd[2143]: time="2024-10-08T19:37:42.323270818Z" level=error msg="Failed to destroy network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.327462 containerd[2143]: time="2024-10-08T19:37:42.327397414Z" level=error msg="encountered an error cleaning up failed sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.330192 containerd[2143]: time="2024-10-08T19:37:42.330107326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9xvcf,Uid:887efa35-ebd2-44fd-a538-5709c92fc6d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.331362 kubelet[3635]: E1008 19:37:42.331326 3635 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.331911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a-shm.mount: Deactivated successfully. 
Oct 8 19:37:42.332853 kubelet[3635]: E1008 19:37:42.332239 3635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9xvcf" Oct 8 19:37:42.332853 kubelet[3635]: E1008 19:37:42.332407 3635 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9xvcf" Oct 8 19:37:42.334467 kubelet[3635]: E1008 19:37:42.333438 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9xvcf_kube-system(887efa35-ebd2-44fd-a538-5709c92fc6d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9xvcf_kube-system(887efa35-ebd2-44fd-a538-5709c92fc6d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9xvcf" podUID="887efa35-ebd2-44fd-a538-5709c92fc6d0" Oct 8 19:37:42.349242 containerd[2143]: time="2024-10-08T19:37:42.349089871Z" level=error msg="Failed to destroy network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.354281 containerd[2143]: time="2024-10-08T19:37:42.352656898Z" level=error msg="encountered an error cleaning up failed sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.355108 containerd[2143]: time="2024-10-08T19:37:42.354620916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668d4c9556-nvw6q,Uid:8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.358064 kubelet[3635]: E1008 19:37:42.356130 3635 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.358064 kubelet[3635]: E1008 19:37:42.356205 3635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668d4c9556-nvw6q" Oct 8 19:37:42.358064 kubelet[3635]: E1008 19:37:42.356244 3635 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668d4c9556-nvw6q" Oct 8 19:37:42.356480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12-shm.mount: Deactivated successfully. Oct 8 19:37:42.358457 kubelet[3635]: E1008 19:37:42.356325 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-668d4c9556-nvw6q_calico-system(8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-668d4c9556-nvw6q_calico-system(8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-668d4c9556-nvw6q" podUID="8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50" Oct 8 19:37:42.367762 containerd[2143]: time="2024-10-08T19:37:42.367677848Z" level=error msg="Failed to destroy network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.371094 containerd[2143]: time="2024-10-08T19:37:42.370217685Z" level=error msg="encountered an error cleaning up failed sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.371094 containerd[2143]: time="2024-10-08T19:37:42.370320138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rjnd,Uid:16c955fe-656e-4e38-9e91-8bafa8ad1d2f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.371094 containerd[2143]: time="2024-10-08T19:37:42.370252804Z" level=error msg="Failed to destroy network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.374788 kubelet[3635]: E1008 19:37:42.373236 3635 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.374788 kubelet[3635]: E1008 19:37:42.373312 3635 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rjnd" Oct 8 19:37:42.374788 kubelet[3635]: E1008 19:37:42.373358 3635 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rjnd" Oct 8 19:37:42.375696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2-shm.mount: Deactivated successfully. 
Oct 8 19:37:42.377304 kubelet[3635]: E1008 19:37:42.375921 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rjnd_calico-system(16c955fe-656e-4e38-9e91-8bafa8ad1d2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rjnd_calico-system(16c955fe-656e-4e38-9e91-8bafa8ad1d2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" Oct 8 19:37:42.381587 containerd[2143]: time="2024-10-08T19:37:42.378553803Z" level=error msg="encountered an error cleaning up failed sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.381587 containerd[2143]: time="2024-10-08T19:37:42.380356010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ns6ms,Uid:855c6ee6-b78c-4f49-b63e-5d329126787e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.381825 kubelet[3635]: E1008 19:37:42.381001 3635 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:42.381825 kubelet[3635]: E1008 19:37:42.381116 3635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ns6ms" Oct 8 19:37:42.381825 kubelet[3635]: E1008 19:37:42.381180 3635 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ns6ms" Oct 8 19:37:42.382121 kubelet[3635]: E1008 19:37:42.381276 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ns6ms_kube-system(855c6ee6-b78c-4f49-b63e-5d329126787e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ns6ms_kube-system(855c6ee6-b78c-4f49-b63e-5d329126787e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ns6ms" 
podUID="855c6ee6-b78c-4f49-b63e-5d329126787e" Oct 8 19:37:42.386804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae-shm.mount: Deactivated successfully. Oct 8 19:37:43.012977 kubelet[3635]: I1008 19:37:43.012935 3635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:37:43.015804 containerd[2143]: time="2024-10-08T19:37:43.014961982Z" level=info msg="StopPodSandbox for \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\"" Oct 8 19:37:43.015804 containerd[2143]: time="2024-10-08T19:37:43.015276321Z" level=info msg="Ensure that sandbox 28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a in task-service has been cleanup successfully" Oct 8 19:37:43.017800 kubelet[3635]: I1008 19:37:43.017758 3635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:37:43.019409 containerd[2143]: time="2024-10-08T19:37:43.019359426Z" level=info msg="StopPodSandbox for \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\"" Oct 8 19:37:43.020779 containerd[2143]: time="2024-10-08T19:37:43.020012140Z" level=info msg="Ensure that sandbox 80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae in task-service has been cleanup successfully" Oct 8 19:37:43.030095 kubelet[3635]: I1008 19:37:43.027302 3635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:37:43.030578 containerd[2143]: time="2024-10-08T19:37:43.030468797Z" level=info msg="StopPodSandbox for \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\"" Oct 8 19:37:43.033655 containerd[2143]: time="2024-10-08T19:37:43.030758632Z" level=info msg="Ensure that sandbox 
b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2 in task-service has been cleanup successfully" Oct 8 19:37:43.039943 kubelet[3635]: I1008 19:37:43.039871 3635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:37:43.043202 containerd[2143]: time="2024-10-08T19:37:43.042198413Z" level=info msg="StopPodSandbox for \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\"" Oct 8 19:37:43.043202 containerd[2143]: time="2024-10-08T19:37:43.042552296Z" level=info msg="Ensure that sandbox 2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12 in task-service has been cleanup successfully" Oct 8 19:37:43.070996 containerd[2143]: time="2024-10-08T19:37:43.068476439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:37:43.197568 containerd[2143]: time="2024-10-08T19:37:43.197491412Z" level=error msg="StopPodSandbox for \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\" failed" error="failed to destroy network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:43.198328 kubelet[3635]: E1008 19:37:43.197978 3635 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:37:43.198328 kubelet[3635]: E1008 19:37:43.198114 3635 kuberuntime_manager.go:1381] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae"} Oct 8 19:37:43.198328 kubelet[3635]: E1008 19:37:43.198186 3635 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"855c6ee6-b78c-4f49-b63e-5d329126787e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:37:43.199042 kubelet[3635]: E1008 19:37:43.198893 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"855c6ee6-b78c-4f49-b63e-5d329126787e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ns6ms" podUID="855c6ee6-b78c-4f49-b63e-5d329126787e" Oct 8 19:37:43.206354 containerd[2143]: time="2024-10-08T19:37:43.206272795Z" level=error msg="StopPodSandbox for \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\" failed" error="failed to destroy network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:43.206811 kubelet[3635]: E1008 19:37:43.206627 3635 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to destroy network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:37:43.206811 kubelet[3635]: E1008 19:37:43.206743 3635 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12"} Oct 8 19:37:43.207592 kubelet[3635]: E1008 19:37:43.207367 3635 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:37:43.207813 kubelet[3635]: E1008 19:37:43.207664 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-668d4c9556-nvw6q" podUID="8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50" Oct 8 19:37:43.208943 containerd[2143]: time="2024-10-08T19:37:43.208843085Z" level=error msg="StopPodSandbox for \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\" failed" 
error="failed to destroy network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:43.209618 kubelet[3635]: E1008 19:37:43.209565 3635 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:37:43.209840 kubelet[3635]: E1008 19:37:43.209757 3635 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2"} Oct 8 19:37:43.210254 kubelet[3635]: E1008 19:37:43.210089 3635 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16c955fe-656e-4e38-9e91-8bafa8ad1d2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:37:43.210254 kubelet[3635]: E1008 19:37:43.210207 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16c955fe-656e-4e38-9e91-8bafa8ad1d2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rjnd" podUID="16c955fe-656e-4e38-9e91-8bafa8ad1d2f" Oct 8 19:37:43.211355 containerd[2143]: time="2024-10-08T19:37:43.210873550Z" level=error msg="StopPodSandbox for \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\" failed" error="failed to destroy network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:37:43.211498 kubelet[3635]: E1008 19:37:43.211413 3635 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:37:43.211498 kubelet[3635]: E1008 19:37:43.211477 3635 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a"} Oct 8 19:37:43.211654 kubelet[3635]: E1008 19:37:43.211538 3635 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"887efa35-ebd2-44fd-a538-5709c92fc6d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:37:43.211654 kubelet[3635]: E1008 19:37:43.211596 3635 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"887efa35-ebd2-44fd-a538-5709c92fc6d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9xvcf" podUID="887efa35-ebd2-44fd-a538-5709c92fc6d0" Oct 8 19:37:48.687333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417005183.mount: Deactivated successfully. Oct 8 19:37:48.757187 containerd[2143]: time="2024-10-08T19:37:48.757102524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:48.759318 containerd[2143]: time="2024-10-08T19:37:48.759258974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:37:48.761304 containerd[2143]: time="2024-10-08T19:37:48.761205468Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:48.764689 containerd[2143]: time="2024-10-08T19:37:48.764605815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:48.766198 containerd[2143]: time="2024-10-08T19:37:48.765910402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id 
\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 5.696609112s" Oct 8 19:37:48.766198 containerd[2143]: time="2024-10-08T19:37:48.765978888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:37:48.795527 containerd[2143]: time="2024-10-08T19:37:48.791935931Z" level=info msg="CreateContainer within sandbox \"419006a8865fdef64635b66be1018740f67d78af8cab082a911d619fe825b54d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:37:48.827428 containerd[2143]: time="2024-10-08T19:37:48.827343515Z" level=info msg="CreateContainer within sandbox \"419006a8865fdef64635b66be1018740f67d78af8cab082a911d619fe825b54d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"807572133ecb49fcfa22e44eddc7b1d2ab53e535c25ab4927b29e32229daf284\"" Oct 8 19:37:48.831150 containerd[2143]: time="2024-10-08T19:37:48.829087478Z" level=info msg="StartContainer for \"807572133ecb49fcfa22e44eddc7b1d2ab53e535c25ab4927b29e32229daf284\"" Oct 8 19:37:48.940011 containerd[2143]: time="2024-10-08T19:37:48.939670719Z" level=info msg="StartContainer for \"807572133ecb49fcfa22e44eddc7b1d2ab53e535c25ab4927b29e32229daf284\" returns successfully" Oct 8 19:37:49.061095 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:37:49.061370 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 19:37:49.117518 kubelet[3635]: I1008 19:37:49.117438 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-lz4ns" podStartSLOduration=1.973252944 podStartE2EDuration="20.117375095s" podCreationTimestamp="2024-10-08 19:37:29 +0000 UTC" firstStartedPulling="2024-10-08 19:37:30.622313505 +0000 UTC m=+24.156170921" lastFinishedPulling="2024-10-08 19:37:48.766435644 +0000 UTC m=+42.300293072" observedRunningTime="2024-10-08 19:37:49.115594994 +0000 UTC m=+42.649452434" watchObservedRunningTime="2024-10-08 19:37:49.117375095 +0000 UTC m=+42.651232547" Oct 8 19:37:51.292074 kernel: bpftool[4797]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:37:51.608917 (udev-worker)[4646]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:37:51.612789 systemd-networkd[1685]: vxlan.calico: Link UP Oct 8 19:37:51.613015 systemd-networkd[1685]: vxlan.calico: Gained carrier Oct 8 19:37:51.665223 (udev-worker)[4644]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:37:52.744416 kubelet[3635]: I1008 19:37:52.744337 3635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:37:53.390462 systemd-networkd[1685]: vxlan.calico: Gained IPv6LL Oct 8 19:37:53.768247 containerd[2143]: time="2024-10-08T19:37:53.767316507Z" level=info msg="StopPodSandbox for \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\"" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.870 [INFO][4924] k8s.go 608: Cleaning up netns ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.870 [INFO][4924] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" iface="eth0" netns="/var/run/netns/cni-cd4a9352-373e-c50e-0b31-976bcf7e9b34" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.871 [INFO][4924] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" iface="eth0" netns="/var/run/netns/cni-cd4a9352-373e-c50e-0b31-976bcf7e9b34" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.872 [INFO][4924] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" iface="eth0" netns="/var/run/netns/cni-cd4a9352-373e-c50e-0b31-976bcf7e9b34" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.872 [INFO][4924] k8s.go 615: Releasing IP address(es) ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.872 [INFO][4924] utils.go 188: Calico CNI releasing IP address ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.991 [INFO][4931] ipam_plugin.go 417: Releasing address using handleID ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.996 [INFO][4931] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:53.997 [INFO][4931] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:54.011 [WARNING][4931] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:54.012 [INFO][4931] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:54.015 [INFO][4931] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:54.023654 containerd[2143]: 2024-10-08 19:37:54.020 [INFO][4924] k8s.go 621: Teardown processing complete. ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:37:54.031542 containerd[2143]: time="2024-10-08T19:37:54.024207442Z" level=info msg="TearDown network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\" successfully" Oct 8 19:37:54.031542 containerd[2143]: time="2024-10-08T19:37:54.024284852Z" level=info msg="StopPodSandbox for \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\" returns successfully" Oct 8 19:37:54.031542 containerd[2143]: time="2024-10-08T19:37:54.026682284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668d4c9556-nvw6q,Uid:8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50,Namespace:calico-system,Attempt:1,}" Oct 8 19:37:54.034090 systemd[1]: run-netns-cni\x2dcd4a9352\x2d373e\x2dc50e\x2d0b31\x2d976bcf7e9b34.mount: Deactivated successfully. Oct 8 19:37:54.289625 systemd-networkd[1685]: caliedcfe903b65: Link UP Oct 8 19:37:54.291906 (udev-worker)[4829]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:37:54.292420 systemd-networkd[1685]: caliedcfe903b65: Gained carrier Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.139 [INFO][4939] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0 calico-kube-controllers-668d4c9556- calico-system 8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50 717 0 2024-10-08 19:37:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:668d4c9556 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-52 calico-kube-controllers-668d4c9556-nvw6q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliedcfe903b65 [] []}} ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.140 [INFO][4939] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.202 [INFO][4949] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" HandleID="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.225 [INFO][4949] ipam_plugin.go 270: 
Auto assigning IP ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" HandleID="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebd40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-52", "pod":"calico-kube-controllers-668d4c9556-nvw6q", "timestamp":"2024-10-08 19:37:54.202524963 +0000 UTC"}, Hostname:"ip-172-31-17-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.225 [INFO][4949] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.225 [INFO][4949] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.225 [INFO][4949] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-52' Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.229 [INFO][4949] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.238 [INFO][4949] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.248 [INFO][4949] ipam.go 489: Trying affinity for 192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.252 [INFO][4949] ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.257 [INFO][4949] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.257 [INFO][4949] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.260 [INFO][4949] ipam.go 1685: Creating new handle: k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17 Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.266 [INFO][4949] ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.277 [INFO][4949] ipam.go 1216: Successfully claimed IPs: [192.168.107.129/26] block=192.168.107.128/26 
handle="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.277 [INFO][4949] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.129/26] handle="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" host="ip-172-31-17-52" Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.277 [INFO][4949] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:54.325520 containerd[2143]: 2024-10-08 19:37:54.277 [INFO][4949] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.107.129/26] IPv6=[] ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" HandleID="k8s-pod-network.6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.331237 containerd[2143]: 2024-10-08 19:37:54.282 [INFO][4939] k8s.go 386: Populated endpoint ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0", GenerateName:"calico-kube-controllers-668d4c9556-", Namespace:"calico-system", SelfLink:"", UID:"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668d4c9556", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"", Pod:"calico-kube-controllers-668d4c9556-nvw6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedcfe903b65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:54.331237 containerd[2143]: 2024-10-08 19:37:54.282 [INFO][4939] k8s.go 387: Calico CNI using IPs: [192.168.107.129/32] ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.331237 containerd[2143]: 2024-10-08 19:37:54.282 [INFO][4939] dataplane_linux.go 68: Setting the host side veth name to caliedcfe903b65 ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.331237 containerd[2143]: 2024-10-08 19:37:54.292 [INFO][4939] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.331237 containerd[2143]: 2024-10-08 19:37:54.294 [INFO][4939] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0", GenerateName:"calico-kube-controllers-668d4c9556-", Namespace:"calico-system", SelfLink:"", UID:"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668d4c9556", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17", Pod:"calico-kube-controllers-668d4c9556-nvw6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedcfe903b65", MAC:"ce:47:5f:8e:3d:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:54.331237 containerd[2143]: 2024-10-08 19:37:54.308 [INFO][4939] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17" Namespace="calico-system" Pod="calico-kube-controllers-668d4c9556-nvw6q" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:37:54.368861 containerd[2143]: time="2024-10-08T19:37:54.368528952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:54.368861 containerd[2143]: time="2024-10-08T19:37:54.368631033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:54.369953 containerd[2143]: time="2024-10-08T19:37:54.368692898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:54.369953 containerd[2143]: time="2024-10-08T19:37:54.369822925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:54.478586 containerd[2143]: time="2024-10-08T19:37:54.478515096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668d4c9556-nvw6q,Uid:8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17\"" Oct 8 19:37:54.483287 containerd[2143]: time="2024-10-08T19:37:54.483210207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:37:54.771767 containerd[2143]: time="2024-10-08T19:37:54.770461567Z" level=info msg="StopPodSandbox for \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\"" Oct 8 19:37:54.771767 containerd[2143]: time="2024-10-08T19:37:54.771501207Z" level=info msg="StopPodSandbox for \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\"" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:54.948 [INFO][5032] k8s.go 608: Cleaning up netns ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:54.949 [INFO][5032] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" iface="eth0" netns="/var/run/netns/cni-3ae65bb9-39ac-0320-7256-9578fcfa85cb" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:54.952 [INFO][5032] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" iface="eth0" netns="/var/run/netns/cni-3ae65bb9-39ac-0320-7256-9578fcfa85cb" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:54.965 [INFO][5032] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" iface="eth0" netns="/var/run/netns/cni-3ae65bb9-39ac-0320-7256-9578fcfa85cb" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:54.973 [INFO][5032] k8s.go 615: Releasing IP address(es) ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:54.973 [INFO][5032] utils.go 188: Calico CNI releasing IP address ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:55.075 [INFO][5047] ipam_plugin.go 417: Releasing address using handleID ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:55.075 [INFO][5047] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:55.076 [INFO][5047] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:55.117 [WARNING][5047] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:55.118 [INFO][5047] ipam_plugin.go 445: Releasing address using workloadID ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:55.163 [INFO][5047] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:55.182205 containerd[2143]: 2024-10-08 19:37:55.169 [INFO][5032] k8s.go 621: Teardown processing complete. ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:37:55.190993 containerd[2143]: time="2024-10-08T19:37:55.190937985Z" level=info msg="TearDown network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\" successfully" Oct 8 19:37:55.194110 containerd[2143]: time="2024-10-08T19:37:55.192186404Z" level=info msg="StopPodSandbox for \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\" returns successfully" Oct 8 19:37:55.194510 containerd[2143]: time="2024-10-08T19:37:55.194253246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ns6ms,Uid:855c6ee6-b78c-4f49-b63e-5d329126787e,Namespace:kube-system,Attempt:1,}" Oct 8 19:37:55.199280 systemd[1]: run-netns-cni\x2d3ae65bb9\x2d39ac\x2d0320\x2d7256\x2d9578fcfa85cb.mount: Deactivated successfully. 
Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:54.968 [INFO][5038] k8s.go 608: Cleaning up netns ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:54.971 [INFO][5038] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" iface="eth0" netns="/var/run/netns/cni-ec1cdf94-2c7a-b58d-b7c6-0b53558f92c2" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:54.975 [INFO][5038] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" iface="eth0" netns="/var/run/netns/cni-ec1cdf94-2c7a-b58d-b7c6-0b53558f92c2" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:54.977 [INFO][5038] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" iface="eth0" netns="/var/run/netns/cni-ec1cdf94-2c7a-b58d-b7c6-0b53558f92c2" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:54.977 [INFO][5038] k8s.go 615: Releasing IP address(es) ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:54.977 [INFO][5038] utils.go 188: Calico CNI releasing IP address ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:55.093 [INFO][5048] ipam_plugin.go 417: Releasing address using handleID ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:55.095 [INFO][5048] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:55.162 [INFO][5048] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:55.256 [WARNING][5048] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:55.256 [INFO][5048] ipam_plugin.go 445: Releasing address using workloadID ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:55.270 [INFO][5048] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:55.296237 containerd[2143]: 2024-10-08 19:37:55.285 [INFO][5038] k8s.go 621: Teardown processing complete. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:37:55.313335 systemd[1]: run-netns-cni\x2dec1cdf94\x2d2c7a\x2db58d\x2db7c6\x2d0b53558f92c2.mount: Deactivated successfully. 
Oct 8 19:37:55.327711 containerd[2143]: time="2024-10-08T19:37:55.313736623Z" level=info msg="TearDown network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\" successfully" Oct 8 19:37:55.327711 containerd[2143]: time="2024-10-08T19:37:55.313784695Z" level=info msg="StopPodSandbox for \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\" returns successfully" Oct 8 19:37:55.327711 containerd[2143]: time="2024-10-08T19:37:55.322154972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9xvcf,Uid:887efa35-ebd2-44fd-a538-5709c92fc6d0,Namespace:kube-system,Attempt:1,}" Oct 8 19:37:55.980205 systemd-networkd[1685]: cali2be23deb166: Link UP Oct 8 19:37:55.981917 systemd-networkd[1685]: cali2be23deb166: Gained carrier Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.796 [INFO][5064] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0 coredns-76f75df574- kube-system 855c6ee6-b78c-4f49-b63e-5d329126787e 726 0 2024-10-08 19:37:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-52 coredns-76f75df574-ns6ms eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2be23deb166 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.796 [INFO][5064] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" 
WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.888 [INFO][5086] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" HandleID="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.915 [INFO][5086] ipam_plugin.go 270: Auto assigning IP ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" HandleID="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0dd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-52", "pod":"coredns-76f75df574-ns6ms", "timestamp":"2024-10-08 19:37:55.888951737 +0000 UTC"}, Hostname:"ip-172-31-17-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.915 [INFO][5086] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.915 [INFO][5086] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.915 [INFO][5086] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-52' Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.918 [INFO][5086] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.925 [INFO][5086] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.934 [INFO][5086] ipam.go 489: Trying affinity for 192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.937 [INFO][5086] ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.942 [INFO][5086] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.942 [INFO][5086] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.944 [INFO][5086] ipam.go 1685: Creating new handle: k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7 Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.954 [INFO][5086] ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.963 [INFO][5086] ipam.go 1216: Successfully claimed IPs: [192.168.107.130/26] block=192.168.107.128/26 
handle="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.963 [INFO][5086] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.130/26] handle="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" host="ip-172-31-17-52" Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.964 [INFO][5086] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:56.036261 containerd[2143]: 2024-10-08 19:37:55.964 [INFO][5086] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.107.130/26] IPv6=[] ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" HandleID="k8s-pod-network.f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:56.047694 containerd[2143]: 2024-10-08 19:37:55.969 [INFO][5064] k8s.go 386: Populated endpoint ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"855c6ee6-b78c-4f49-b63e-5d329126787e", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"", Pod:"coredns-76f75df574-ns6ms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2be23deb166", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:56.047694 containerd[2143]: 2024-10-08 19:37:55.970 [INFO][5064] k8s.go 387: Calico CNI using IPs: [192.168.107.130/32] ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:56.047694 containerd[2143]: 2024-10-08 19:37:55.970 [INFO][5064] dataplane_linux.go 68: Setting the host side veth name to cali2be23deb166 ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:56.047694 containerd[2143]: 2024-10-08 19:37:55.985 [INFO][5064] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" 
WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:56.047694 containerd[2143]: 2024-10-08 19:37:55.988 [INFO][5064] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"855c6ee6-b78c-4f49-b63e-5d329126787e", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7", Pod:"coredns-76f75df574-ns6ms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2be23deb166", MAC:"22:6c:24:42:88:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:56.047694 containerd[2143]: 2024-10-08 19:37:56.023 [INFO][5064] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7" Namespace="kube-system" Pod="coredns-76f75df574-ns6ms" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:37:56.139710 systemd-networkd[1685]: califfddaad8c38: Link UP Oct 8 19:37:56.141645 systemd-networkd[1685]: califfddaad8c38: Gained carrier Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.821 [INFO][5073] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0 coredns-76f75df574- kube-system 887efa35-ebd2-44fd-a538-5709c92fc6d0 729 0 2024-10-08 19:37:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-52 coredns-76f75df574-9xvcf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califfddaad8c38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.822 [INFO][5073] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" 
WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.922 [INFO][5090] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" HandleID="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.941 [INFO][5090] ipam_plugin.go 270: Auto assigning IP ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" HandleID="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000115ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-52", "pod":"coredns-76f75df574-9xvcf", "timestamp":"2024-10-08 19:37:55.922041768 +0000 UTC"}, Hostname:"ip-172-31-17-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.942 [INFO][5090] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.964 [INFO][5090] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.964 [INFO][5090] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-52' Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.967 [INFO][5090] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:55.987 [INFO][5090] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.008 [INFO][5090] ipam.go 489: Trying affinity for 192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.023 [INFO][5090] ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.055 [INFO][5090] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.055 [INFO][5090] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.064 [INFO][5090] ipam.go 1685: Creating new handle: k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.081 [INFO][5090] ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.105 [INFO][5090] ipam.go 1216: Successfully claimed IPs: [192.168.107.131/26] block=192.168.107.128/26 
handle="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.105 [INFO][5090] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.131/26] handle="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" host="ip-172-31-17-52" Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.105 [INFO][5090] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:56.207814 containerd[2143]: 2024-10-08 19:37:56.105 [INFO][5090] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.107.131/26] IPv6=[] ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" HandleID="k8s-pod-network.fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:56.210740 containerd[2143]: 2024-10-08 19:37:56.127 [INFO][5073] k8s.go 386: Populated endpoint ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"887efa35-ebd2-44fd-a538-5709c92fc6d0", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"", Pod:"coredns-76f75df574-9xvcf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfddaad8c38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:56.210740 containerd[2143]: 2024-10-08 19:37:56.127 [INFO][5073] k8s.go 387: Calico CNI using IPs: [192.168.107.131/32] ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:56.210740 containerd[2143]: 2024-10-08 19:37:56.127 [INFO][5073] dataplane_linux.go 68: Setting the host side veth name to califfddaad8c38 ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:56.210740 containerd[2143]: 2024-10-08 19:37:56.141 [INFO][5073] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" 
WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:56.210740 containerd[2143]: 2024-10-08 19:37:56.148 [INFO][5073] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"887efa35-ebd2-44fd-a538-5709c92fc6d0", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f", Pod:"coredns-76f75df574-9xvcf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfddaad8c38", MAC:"6e:f4:26:e4:3e:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:56.210740 containerd[2143]: 2024-10-08 19:37:56.179 [INFO][5073] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f" Namespace="kube-system" Pod="coredns-76f75df574-9xvcf" WorkloadEndpoint="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:37:56.240017 containerd[2143]: time="2024-10-08T19:37:56.239232118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:56.244674 containerd[2143]: time="2024-10-08T19:37:56.239495831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:56.244674 containerd[2143]: time="2024-10-08T19:37:56.239648034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:56.244674 containerd[2143]: time="2024-10-08T19:37:56.240780568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:56.271519 systemd-networkd[1685]: caliedcfe903b65: Gained IPv6LL Oct 8 19:37:56.370004 containerd[2143]: time="2024-10-08T19:37:56.366995661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:56.370004 containerd[2143]: time="2024-10-08T19:37:56.369836908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:56.370004 containerd[2143]: time="2024-10-08T19:37:56.369873322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:56.373555 containerd[2143]: time="2024-10-08T19:37:56.370992170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:56.512937 systemd[1]: Started sshd@7-172.31.17.52:22-139.178.68.195:41608.service - OpenSSH per-connection server daemon (139.178.68.195:41608). Oct 8 19:37:56.520767 containerd[2143]: time="2024-10-08T19:37:56.520220499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ns6ms,Uid:855c6ee6-b78c-4f49-b63e-5d329126787e,Namespace:kube-system,Attempt:1,} returns sandbox id \"f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7\"" Oct 8 19:37:56.556521 containerd[2143]: time="2024-10-08T19:37:56.555581606Z" level=info msg="CreateContainer within sandbox \"f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:37:56.605779 containerd[2143]: time="2024-10-08T19:37:56.605720879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9xvcf,Uid:887efa35-ebd2-44fd-a538-5709c92fc6d0,Namespace:kube-system,Attempt:1,} returns sandbox id \"fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f\"" Oct 8 19:37:56.625061 containerd[2143]: time="2024-10-08T19:37:56.624720995Z" level=info msg="CreateContainer within sandbox \"f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6ef633cdfba2cce33b9709ed5bde81627d5427151af6406488043ea97f9d5f6\"" Oct 8 19:37:56.627163 containerd[2143]: time="2024-10-08T19:37:56.626678224Z" level=info msg="CreateContainer within sandbox 
\"fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:37:56.631521 containerd[2143]: time="2024-10-08T19:37:56.631472561Z" level=info msg="StartContainer for \"d6ef633cdfba2cce33b9709ed5bde81627d5427151af6406488043ea97f9d5f6\"" Oct 8 19:37:56.665697 containerd[2143]: time="2024-10-08T19:37:56.665617920Z" level=info msg="CreateContainer within sandbox \"fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2a9517b53e6d68e049e911857a39955dd310bbe4ccaad1ecdbbdc9ac4a9f25c\"" Oct 8 19:37:56.670202 containerd[2143]: time="2024-10-08T19:37:56.670013853Z" level=info msg="StartContainer for \"a2a9517b53e6d68e049e911857a39955dd310bbe4ccaad1ecdbbdc9ac4a9f25c\"" Oct 8 19:37:56.744785 sshd[5207]: Accepted publickey for core from 139.178.68.195 port 41608 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:56.751673 sshd[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:56.790122 containerd[2143]: time="2024-10-08T19:37:56.783567648Z" level=info msg="StopPodSandbox for \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\"" Oct 8 19:37:56.795751 systemd-logind[2113]: New session 8 of user core. Oct 8 19:37:56.806771 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 8 19:37:56.976903 containerd[2143]: time="2024-10-08T19:37:56.974533154Z" level=info msg="StartContainer for \"d6ef633cdfba2cce33b9709ed5bde81627d5427151af6406488043ea97f9d5f6\" returns successfully" Oct 8 19:37:57.017602 containerd[2143]: time="2024-10-08T19:37:57.016121246Z" level=info msg="StartContainer for \"a2a9517b53e6d68e049e911857a39955dd310bbe4ccaad1ecdbbdc9ac4a9f25c\" returns successfully" Oct 8 19:37:57.038623 systemd-networkd[1685]: cali2be23deb166: Gained IPv6LL Oct 8 19:37:57.305749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056670124.mount: Deactivated successfully. Oct 8 19:37:57.454798 kubelet[3635]: I1008 19:37:57.454673 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9xvcf" podStartSLOduration=36.454605695 podStartE2EDuration="36.454605695s" podCreationTimestamp="2024-10-08 19:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:57.451477612 +0000 UTC m=+50.985335076" watchObservedRunningTime="2024-10-08 19:37:57.454605695 +0000 UTC m=+50.988463207" Oct 8 19:37:57.459487 kubelet[3635]: I1008 19:37:57.454864 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ns6ms" podStartSLOduration=36.45482768 podStartE2EDuration="36.45482768s" podCreationTimestamp="2024-10-08 19:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:57.410159709 +0000 UTC m=+50.944017233" watchObservedRunningTime="2024-10-08 19:37:57.45482768 +0000 UTC m=+50.988685108" Oct 8 19:37:57.529745 sshd[5207]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:57.556440 systemd-networkd[1685]: califfddaad8c38: Gained IPv6LL Oct 8 19:37:57.562417 systemd[1]: sshd@7-172.31.17.52:22-139.178.68.195:41608.service: Deactivated 
successfully. Oct 8 19:37:57.563700 systemd-logind[2113]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:37:57.571868 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:37:57.577559 systemd-logind[2113]: Removed session 8. Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.473 [INFO][5288] k8s.go 608: Cleaning up netns ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.495 [INFO][5288] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" iface="eth0" netns="/var/run/netns/cni-9d38461d-047b-beb9-c87d-f53e8d6893af" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.500 [INFO][5288] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" iface="eth0" netns="/var/run/netns/cni-9d38461d-047b-beb9-c87d-f53e8d6893af" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.504 [INFO][5288] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" iface="eth0" netns="/var/run/netns/cni-9d38461d-047b-beb9-c87d-f53e8d6893af" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.504 [INFO][5288] k8s.go 615: Releasing IP address(es) ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.504 [INFO][5288] utils.go 188: Calico CNI releasing IP address ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.667 [INFO][5352] ipam_plugin.go 417: Releasing address using handleID ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.668 [INFO][5352] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.668 [INFO][5352] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.681 [WARNING][5352] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.681 [INFO][5352] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.685 [INFO][5352] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:57.694585 containerd[2143]: 2024-10-08 19:37:57.689 [INFO][5288] k8s.go 621: Teardown processing complete. ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:37:57.699713 containerd[2143]: time="2024-10-08T19:37:57.698396664Z" level=info msg="TearDown network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\" successfully" Oct 8 19:37:57.699713 containerd[2143]: time="2024-10-08T19:37:57.698445443Z" level=info msg="StopPodSandbox for \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\" returns successfully" Oct 8 19:37:57.702533 systemd[1]: run-netns-cni\x2d9d38461d\x2d047b\x2dbeb9\x2dc87d\x2df53e8d6893af.mount: Deactivated successfully. 
Oct 8 19:37:57.704871 containerd[2143]: time="2024-10-08T19:37:57.704283775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rjnd,Uid:16c955fe-656e-4e38-9e91-8bafa8ad1d2f,Namespace:calico-system,Attempt:1,}" Oct 8 19:37:58.049015 systemd-networkd[1685]: cali7a4e39b9891: Link UP Oct 8 19:37:58.055170 systemd-networkd[1685]: cali7a4e39b9891: Gained carrier Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.851 [INFO][5368] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0 csi-node-driver- calico-system 16c955fe-656e-4e38-9e91-8bafa8ad1d2f 789 0 2024-10-08 19:37:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-17-52 csi-node-driver-7rjnd eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali7a4e39b9891 [] []}} ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.851 [INFO][5368] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.958 [INFO][5375] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" HandleID="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" 
Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.978 [INFO][5375] ipam_plugin.go 270: Auto assigning IP ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" HandleID="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a210), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-52", "pod":"csi-node-driver-7rjnd", "timestamp":"2024-10-08 19:37:57.958921936 +0000 UTC"}, Hostname:"ip-172-31-17-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.979 [INFO][5375] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.980 [INFO][5375] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.980 [INFO][5375] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-52' Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.984 [INFO][5375] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:57.992 [INFO][5375] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.005 [INFO][5375] ipam.go 489: Trying affinity for 192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.009 [INFO][5375] ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.014 [INFO][5375] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.014 [INFO][5375] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.016 [INFO][5375] ipam.go 1685: Creating new handle: k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.025 [INFO][5375] ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.034 [INFO][5375] ipam.go 1216: Successfully claimed IPs: [192.168.107.132/26] block=192.168.107.128/26 
handle="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.034 [INFO][5375] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.132/26] handle="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" host="ip-172-31-17-52" Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.034 [INFO][5375] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:58.120151 containerd[2143]: 2024-10-08 19:37:58.034 [INFO][5375] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.107.132/26] IPv6=[] ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" HandleID="k8s-pod-network.76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:58.121879 containerd[2143]: 2024-10-08 19:37:58.043 [INFO][5368] k8s.go 386: Populated endpoint ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16c955fe-656e-4e38-9e91-8bafa8ad1d2f", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"", Pod:"csi-node-driver-7rjnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7a4e39b9891", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:58.121879 containerd[2143]: 2024-10-08 19:37:58.044 [INFO][5368] k8s.go 387: Calico CNI using IPs: [192.168.107.132/32] ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:58.121879 containerd[2143]: 2024-10-08 19:37:58.044 [INFO][5368] dataplane_linux.go 68: Setting the host side veth name to cali7a4e39b9891 ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:58.121879 containerd[2143]: 2024-10-08 19:37:58.053 [INFO][5368] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:58.121879 containerd[2143]: 2024-10-08 19:37:58.060 [INFO][5368] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" 
WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16c955fe-656e-4e38-9e91-8bafa8ad1d2f", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a", Pod:"csi-node-driver-7rjnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7a4e39b9891", MAC:"d2:56:fa:9f:8a:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:58.121879 containerd[2143]: 2024-10-08 19:37:58.099 [INFO][5368] k8s.go 500: Wrote updated endpoint to datastore ContainerID="76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a" Namespace="calico-system" Pod="csi-node-driver-7rjnd" WorkloadEndpoint="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:37:58.204942 containerd[2143]: 
time="2024-10-08T19:37:58.204419915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:58.204942 containerd[2143]: time="2024-10-08T19:37:58.204508179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:58.204942 containerd[2143]: time="2024-10-08T19:37:58.204564730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:58.204942 containerd[2143]: time="2024-10-08T19:37:58.204801840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:58.336734 containerd[2143]: time="2024-10-08T19:37:58.336538143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rjnd,Uid:16c955fe-656e-4e38-9e91-8bafa8ad1d2f,Namespace:calico-system,Attempt:1,} returns sandbox id \"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a\"" Oct 8 19:37:58.773395 containerd[2143]: time="2024-10-08T19:37:58.773318197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:58.780357 containerd[2143]: time="2024-10-08T19:37:58.780244252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:37:58.788854 containerd[2143]: time="2024-10-08T19:37:58.788748886Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:58.796324 containerd[2143]: time="2024-10-08T19:37:58.796181484Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:58.798766 containerd[2143]: time="2024-10-08T19:37:58.798389723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 4.315096589s" Oct 8 19:37:58.798766 containerd[2143]: time="2024-10-08T19:37:58.798516020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:37:58.799953 containerd[2143]: time="2024-10-08T19:37:58.799605567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:37:58.837297 containerd[2143]: time="2024-10-08T19:37:58.837233959Z" level=info msg="CreateContainer within sandbox \"6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:37:58.864788 containerd[2143]: time="2024-10-08T19:37:58.863226625Z" level=info msg="CreateContainer within sandbox \"6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"aeeefb5438c0a30c040a05a6261330264ba50719852efcc3b7a98d52c74f4052\"" Oct 8 19:37:58.868096 containerd[2143]: time="2024-10-08T19:37:58.865926966Z" level=info msg="StartContainer for \"aeeefb5438c0a30c040a05a6261330264ba50719852efcc3b7a98d52c74f4052\"" Oct 8 19:37:58.994978 containerd[2143]: time="2024-10-08T19:37:58.994870778Z" level=info msg="StartContainer for 
\"aeeefb5438c0a30c040a05a6261330264ba50719852efcc3b7a98d52c74f4052\" returns successfully" Oct 8 19:37:59.470518 systemd-networkd[1685]: cali7a4e39b9891: Gained IPv6LL Oct 8 19:37:59.566644 kubelet[3635]: I1008 19:37:59.566572 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-668d4c9556-nvw6q" podStartSLOduration=25.249639272 podStartE2EDuration="29.56647252s" podCreationTimestamp="2024-10-08 19:37:30 +0000 UTC" firstStartedPulling="2024-10-08 19:37:54.482291695 +0000 UTC m=+48.016149123" lastFinishedPulling="2024-10-08 19:37:58.799124943 +0000 UTC m=+52.332982371" observedRunningTime="2024-10-08 19:37:59.419966121 +0000 UTC m=+52.953823573" watchObservedRunningTime="2024-10-08 19:37:59.56647252 +0000 UTC m=+53.100329948" Oct 8 19:38:00.101613 containerd[2143]: time="2024-10-08T19:38:00.101558001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:38:00.104194 containerd[2143]: time="2024-10-08T19:38:00.104109880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:38:00.104779 containerd[2143]: time="2024-10-08T19:38:00.104724981Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:38:00.110829 containerd[2143]: time="2024-10-08T19:38:00.110748427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:38:00.114069 containerd[2143]: time="2024-10-08T19:38:00.111883492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.312218986s" Oct 8 19:38:00.114069 containerd[2143]: time="2024-10-08T19:38:00.111951510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:38:00.117139 containerd[2143]: time="2024-10-08T19:38:00.117088180Z" level=info msg="CreateContainer within sandbox \"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:38:00.144577 containerd[2143]: time="2024-10-08T19:38:00.144256521Z" level=info msg="CreateContainer within sandbox \"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4de906d2949a2d4c536628aa9018076430e2d065830a627cc069b39297e1d0cf\"" Oct 8 19:38:00.146924 containerd[2143]: time="2024-10-08T19:38:00.146877954Z" level=info msg="StartContainer for \"4de906d2949a2d4c536628aa9018076430e2d065830a627cc069b39297e1d0cf\"" Oct 8 19:38:00.504545 containerd[2143]: time="2024-10-08T19:38:00.504448312Z" level=info msg="StartContainer for \"4de906d2949a2d4c536628aa9018076430e2d065830a627cc069b39297e1d0cf\" returns successfully" Oct 8 19:38:00.508784 containerd[2143]: time="2024-10-08T19:38:00.508722158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:38:01.624666 ntpd[2094]: Listen normally on 6 vxlan.calico 192.168.107.128:123 Oct 8 19:38:01.631731 ntpd[2094]: 8 Oct 19:38:01 ntpd[2094]: Listen normally on 6 vxlan.calico 192.168.107.128:123 Oct 8 19:38:01.631731 ntpd[2094]: 8 Oct 19:38:01 ntpd[2094]: Listen normally on 7 vxlan.calico [fe80::6423:a9ff:fed9:1939%4]:123 Oct 8 19:38:01.631731 ntpd[2094]: 8 Oct 19:38:01 ntpd[2094]: Listen normally on 8 
caliedcfe903b65 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 8 19:38:01.631731 ntpd[2094]: 8 Oct 19:38:01 ntpd[2094]: Listen normally on 9 cali2be23deb166 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 8 19:38:01.631731 ntpd[2094]: 8 Oct 19:38:01 ntpd[2094]: Listen normally on 10 califfddaad8c38 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:38:01.631731 ntpd[2094]: 8 Oct 19:38:01 ntpd[2094]: Listen normally on 11 cali7a4e39b9891 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:38:01.624833 ntpd[2094]: Listen normally on 7 vxlan.calico [fe80::6423:a9ff:fed9:1939%4]:123 Oct 8 19:38:01.624923 ntpd[2094]: Listen normally on 8 caliedcfe903b65 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 8 19:38:01.625002 ntpd[2094]: Listen normally on 9 cali2be23deb166 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 8 19:38:01.625100 ntpd[2094]: Listen normally on 10 califfddaad8c38 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:38:01.625171 ntpd[2094]: Listen normally on 11 cali7a4e39b9891 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:38:02.242654 containerd[2143]: time="2024-10-08T19:38:02.242247871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:38:02.246078 containerd[2143]: time="2024-10-08T19:38:02.244834413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:38:02.251378 containerd[2143]: time="2024-10-08T19:38:02.251308342Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:38:02.257734 containerd[2143]: time="2024-10-08T19:38:02.257649918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:38:02.263573 
containerd[2143]: time="2024-10-08T19:38:02.263338960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.75454858s" Oct 8 19:38:02.263573 containerd[2143]: time="2024-10-08T19:38:02.263420831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:38:02.270595 containerd[2143]: time="2024-10-08T19:38:02.269896188Z" level=info msg="CreateContainer within sandbox \"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:38:02.314112 containerd[2143]: time="2024-10-08T19:38:02.313976201Z" level=info msg="CreateContainer within sandbox \"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9030c62fca3c7c3adecf798ade9f005ebcad5da0472ceecb7c62e789f4ff4414\"" Oct 8 19:38:02.315149 containerd[2143]: time="2024-10-08T19:38:02.314959733Z" level=info msg="StartContainer for \"9030c62fca3c7c3adecf798ade9f005ebcad5da0472ceecb7c62e789f4ff4414\"" Oct 8 19:38:02.528489 containerd[2143]: time="2024-10-08T19:38:02.528286613Z" level=info msg="StartContainer for \"9030c62fca3c7c3adecf798ade9f005ebcad5da0472ceecb7c62e789f4ff4414\" returns successfully" Oct 8 19:38:02.572053 systemd[1]: Started sshd@8-172.31.17.52:22-139.178.68.195:37368.service - OpenSSH per-connection server daemon (139.178.68.195:37368). 
Oct 8 19:38:02.777737 sshd[5583]: Accepted publickey for core from 139.178.68.195 port 37368 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:02.786840 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:02.809610 systemd-logind[2113]: New session 9 of user core. Oct 8 19:38:02.820775 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:38:02.969615 kubelet[3635]: I1008 19:38:02.969523 3635 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:38:02.969615 kubelet[3635]: I1008 19:38:02.969632 3635 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:38:03.185468 sshd[5583]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:03.198726 systemd[1]: sshd@8-172.31.17.52:22-139.178.68.195:37368.service: Deactivated successfully. Oct 8 19:38:03.200638 systemd-logind[2113]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:38:03.215087 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:38:03.220851 systemd-logind[2113]: Removed session 9. Oct 8 19:38:06.714381 containerd[2143]: time="2024-10-08T19:38:06.714216661Z" level=info msg="StopPodSandbox for \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\"" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.820 [WARNING][5617] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"855c6ee6-b78c-4f49-b63e-5d329126787e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7", Pod:"coredns-76f75df574-ns6ms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2be23deb166", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.820 [INFO][5617] k8s.go 608: Cleaning up 
netns ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.820 [INFO][5617] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" iface="eth0" netns="" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.820 [INFO][5617] k8s.go 615: Releasing IP address(es) ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.820 [INFO][5617] utils.go 188: Calico CNI releasing IP address ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.877 [INFO][5626] ipam_plugin.go 417: Releasing address using handleID ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.878 [INFO][5626] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.879 [INFO][5626] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.913 [WARNING][5626] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.915 [INFO][5626] ipam_plugin.go 445: Releasing address using workloadID ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.921 [INFO][5626] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:06.933458 containerd[2143]: 2024-10-08 19:38:06.924 [INFO][5617] k8s.go 621: Teardown processing complete. ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:06.933458 containerd[2143]: time="2024-10-08T19:38:06.932852697Z" level=info msg="TearDown network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\" successfully" Oct 8 19:38:06.933458 containerd[2143]: time="2024-10-08T19:38:06.932890394Z" level=info msg="StopPodSandbox for \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\" returns successfully" Oct 8 19:38:06.941055 containerd[2143]: time="2024-10-08T19:38:06.937663287Z" level=info msg="RemovePodSandbox for \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\"" Oct 8 19:38:06.941055 containerd[2143]: time="2024-10-08T19:38:06.937726519Z" level=info msg="Forcibly stopping sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\"" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.016 [WARNING][5645] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"855c6ee6-b78c-4f49-b63e-5d329126787e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"f07ae7b564775a1dee96b71b6c2482ae9c31fe35b5da082d80d26b2f39ff9cb7", Pod:"coredns-76f75df574-ns6ms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2be23deb166", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.017 [INFO][5645] k8s.go 608: Cleaning up 
netns ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.017 [INFO][5645] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" iface="eth0" netns="" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.017 [INFO][5645] k8s.go 615: Releasing IP address(es) ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.017 [INFO][5645] utils.go 188: Calico CNI releasing IP address ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.063 [INFO][5652] ipam_plugin.go 417: Releasing address using handleID ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.063 [INFO][5652] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.063 [INFO][5652] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.079 [WARNING][5652] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.079 [INFO][5652] ipam_plugin.go 445: Releasing address using workloadID ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" HandleID="k8s-pod-network.80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--ns6ms-eth0" Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.081 [INFO][5652] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:07.087471 containerd[2143]: 2024-10-08 19:38:07.084 [INFO][5645] k8s.go 621: Teardown processing complete. ContainerID="80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae" Oct 8 19:38:07.087471 containerd[2143]: time="2024-10-08T19:38:07.087412863Z" level=info msg="TearDown network for sandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\" successfully" Oct 8 19:38:07.093497 containerd[2143]: time="2024-10-08T19:38:07.093429617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:38:07.093645 containerd[2143]: time="2024-10-08T19:38:07.093554978Z" level=info msg="RemovePodSandbox \"80a5cdf30e33a192a962d87a5a7ce23ab9ec253ada84bbd76a7332b701670cae\" returns successfully" Oct 8 19:38:07.094353 containerd[2143]: time="2024-10-08T19:38:07.094291830Z" level=info msg="StopPodSandbox for \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\"" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.168 [WARNING][5670] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16c955fe-656e-4e38-9e91-8bafa8ad1d2f", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a", Pod:"csi-node-driver-7rjnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali7a4e39b9891", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.169 [INFO][5670] k8s.go 608: Cleaning up netns ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.169 [INFO][5670] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" iface="eth0" netns="" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.169 [INFO][5670] k8s.go 615: Releasing IP address(es) ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.169 [INFO][5670] utils.go 188: Calico CNI releasing IP address ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.217 [INFO][5676] ipam_plugin.go 417: Releasing address using handleID ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.217 [INFO][5676] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.218 [INFO][5676] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.230 [WARNING][5676] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.230 [INFO][5676] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.232 [INFO][5676] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:07.237936 containerd[2143]: 2024-10-08 19:38:07.235 [INFO][5670] k8s.go 621: Teardown processing complete. ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.240149 containerd[2143]: time="2024-10-08T19:38:07.237981557Z" level=info msg="TearDown network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\" successfully" Oct 8 19:38:07.240149 containerd[2143]: time="2024-10-08T19:38:07.238022697Z" level=info msg="StopPodSandbox for \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\" returns successfully" Oct 8 19:38:07.240149 containerd[2143]: time="2024-10-08T19:38:07.239831752Z" level=info msg="RemovePodSandbox for \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\"" Oct 8 19:38:07.240149 containerd[2143]: time="2024-10-08T19:38:07.239906882Z" level=info msg="Forcibly stopping sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\"" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.323 [WARNING][5694] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16c955fe-656e-4e38-9e91-8bafa8ad1d2f", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"76ca04453e137c6338b0b0022595e8953b6ddd9ab62962a0195e0dd67871cc8a", Pod:"csi-node-driver-7rjnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7a4e39b9891", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.324 [INFO][5694] k8s.go 608: Cleaning up netns ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.324 [INFO][5694] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" iface="eth0" netns="" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.324 [INFO][5694] k8s.go 615: Releasing IP address(es) ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.324 [INFO][5694] utils.go 188: Calico CNI releasing IP address ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.370 [INFO][5700] ipam_plugin.go 417: Releasing address using handleID ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.370 [INFO][5700] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.370 [INFO][5700] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.383 [WARNING][5700] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.383 [INFO][5700] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" HandleID="k8s-pod-network.b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Workload="ip--172--31--17--52-k8s-csi--node--driver--7rjnd-eth0" Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.386 [INFO][5700] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:07.393632 containerd[2143]: 2024-10-08 19:38:07.390 [INFO][5694] k8s.go 621: Teardown processing complete. ContainerID="b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2" Oct 8 19:38:07.394728 containerd[2143]: time="2024-10-08T19:38:07.393590394Z" level=info msg="TearDown network for sandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\" successfully" Oct 8 19:38:07.401379 containerd[2143]: time="2024-10-08T19:38:07.401309553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:38:07.401532 containerd[2143]: time="2024-10-08T19:38:07.401405457Z" level=info msg="RemovePodSandbox \"b1f7d625033c490f104b2e3c818b504f8b1d1041f423b1c0d1886c3c0f4116e2\" returns successfully" Oct 8 19:38:07.402475 containerd[2143]: time="2024-10-08T19:38:07.402006704Z" level=info msg="StopPodSandbox for \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\"" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.473 [WARNING][5718] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"887efa35-ebd2-44fd-a538-5709c92fc6d0", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f", Pod:"coredns-76f75df574-9xvcf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfddaad8c38", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.475 [INFO][5718] k8s.go 608: Cleaning up netns ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.475 [INFO][5718] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" iface="eth0" netns="" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.475 [INFO][5718] k8s.go 615: Releasing IP address(es) ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.475 [INFO][5718] utils.go 188: Calico CNI releasing IP address ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.515 [INFO][5724] ipam_plugin.go 417: Releasing address using handleID ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.516 [INFO][5724] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.516 [INFO][5724] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.536 [WARNING][5724] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.536 [INFO][5724] ipam_plugin.go 445: Releasing address using workloadID ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.539 [INFO][5724] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:07.546160 containerd[2143]: 2024-10-08 19:38:07.542 [INFO][5718] k8s.go 621: Teardown processing complete. 
ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.548335 containerd[2143]: time="2024-10-08T19:38:07.546238836Z" level=info msg="TearDown network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\" successfully" Oct 8 19:38:07.548335 containerd[2143]: time="2024-10-08T19:38:07.546284126Z" level=info msg="StopPodSandbox for \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\" returns successfully" Oct 8 19:38:07.548335 containerd[2143]: time="2024-10-08T19:38:07.547363394Z" level=info msg="RemovePodSandbox for \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\"" Oct 8 19:38:07.548335 containerd[2143]: time="2024-10-08T19:38:07.547597109Z" level=info msg="Forcibly stopping sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\"" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.630 [WARNING][5743] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"887efa35-ebd2-44fd-a538-5709c92fc6d0", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"fb93038c2925da0ee7d2ce2dd3188bf0ca3638e5a4fcafe1ebb1e5bd7e42311f", Pod:"coredns-76f75df574-9xvcf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfddaad8c38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.631 [INFO][5743] k8s.go 608: Cleaning up 
netns ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.631 [INFO][5743] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" iface="eth0" netns="" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.631 [INFO][5743] k8s.go 615: Releasing IP address(es) ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.631 [INFO][5743] utils.go 188: Calico CNI releasing IP address ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.690 [INFO][5750] ipam_plugin.go 417: Releasing address using handleID ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.691 [INFO][5750] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.691 [INFO][5750] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.727 [WARNING][5750] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.728 [INFO][5750] ipam_plugin.go 445: Releasing address using workloadID ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" HandleID="k8s-pod-network.28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Workload="ip--172--31--17--52-k8s-coredns--76f75df574--9xvcf-eth0" Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.745 [INFO][5750] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:07.752637 containerd[2143]: 2024-10-08 19:38:07.748 [INFO][5743] k8s.go 621: Teardown processing complete. ContainerID="28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a" Oct 8 19:38:07.754502 containerd[2143]: time="2024-10-08T19:38:07.752717802Z" level=info msg="TearDown network for sandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\" successfully" Oct 8 19:38:07.759232 containerd[2143]: time="2024-10-08T19:38:07.759156637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:38:07.759692 containerd[2143]: time="2024-10-08T19:38:07.759471012Z" level=info msg="RemovePodSandbox \"28eba041d5a348f2a390aeb41a3fc6b765cffb04145111c00e23dfa1a21b830a\" returns successfully" Oct 8 19:38:07.760507 containerd[2143]: time="2024-10-08T19:38:07.760442322Z" level=info msg="StopPodSandbox for \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\"" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.849 [WARNING][5771] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0", GenerateName:"calico-kube-controllers-668d4c9556-", Namespace:"calico-system", SelfLink:"", UID:"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668d4c9556", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17", Pod:"calico-kube-controllers-668d4c9556-nvw6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedcfe903b65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.857 [INFO][5771] k8s.go 608: Cleaning up netns ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.857 [INFO][5771] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" iface="eth0" netns="" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.858 [INFO][5771] k8s.go 615: Releasing IP address(es) ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.858 [INFO][5771] utils.go 188: Calico CNI releasing IP address ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.899 [INFO][5778] ipam_plugin.go 417: Releasing address using handleID ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.899 [INFO][5778] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.899 [INFO][5778] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.912 [WARNING][5778] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.912 [INFO][5778] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.914 [INFO][5778] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:07.921093 containerd[2143]: 2024-10-08 19:38:07.917 [INFO][5771] k8s.go 621: Teardown processing complete. ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:07.922026 containerd[2143]: time="2024-10-08T19:38:07.921165317Z" level=info msg="TearDown network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\" successfully" Oct 8 19:38:07.922026 containerd[2143]: time="2024-10-08T19:38:07.921205149Z" level=info msg="StopPodSandbox for \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\" returns successfully" Oct 8 19:38:07.922596 containerd[2143]: time="2024-10-08T19:38:07.922552423Z" level=info msg="RemovePodSandbox for \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\"" Oct 8 19:38:07.922682 containerd[2143]: time="2024-10-08T19:38:07.922605484Z" level=info msg="Forcibly stopping sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\"" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:07.987 [WARNING][5796] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0", GenerateName:"calico-kube-controllers-668d4c9556-", Namespace:"calico-system", SelfLink:"", UID:"8ee8d43b-2d2d-4f3e-a237-2f7ee6280d50", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668d4c9556", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"6d845933b1e6568fdfeaa55a52b3a5cd2c8e6ee5a15b76db065e8f0a5cb3fa17", Pod:"calico-kube-controllers-668d4c9556-nvw6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedcfe903b65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:07.988 [INFO][5796] k8s.go 608: Cleaning up netns ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:07.988 [INFO][5796] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" iface="eth0" netns="" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:07.988 [INFO][5796] k8s.go 615: Releasing IP address(es) ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:07.988 [INFO][5796] utils.go 188: Calico CNI releasing IP address ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:08.028 [INFO][5802] ipam_plugin.go 417: Releasing address using handleID ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:08.028 [INFO][5802] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:08.028 [INFO][5802] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:08.043 [WARNING][5802] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:08.043 [INFO][5802] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" HandleID="k8s-pod-network.2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Workload="ip--172--31--17--52-k8s-calico--kube--controllers--668d4c9556--nvw6q-eth0" Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:08.047 [INFO][5802] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:38:08.053531 containerd[2143]: 2024-10-08 19:38:08.050 [INFO][5796] k8s.go 621: Teardown processing complete. ContainerID="2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12" Oct 8 19:38:08.055597 containerd[2143]: time="2024-10-08T19:38:08.055386549Z" level=info msg="TearDown network for sandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\" successfully" Oct 8 19:38:08.061650 containerd[2143]: time="2024-10-08T19:38:08.061362032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:38:08.061650 containerd[2143]: time="2024-10-08T19:38:08.061508766Z" level=info msg="RemovePodSandbox \"2bb82b8cc766b23cba1443999a32af113a1e31ed2d3724e6e866e6b4f80a2d12\" returns successfully" Oct 8 19:38:08.217487 systemd[1]: Started sshd@9-172.31.17.52:22-139.178.68.195:37382.service - OpenSSH per-connection server daemon (139.178.68.195:37382). 
Oct 8 19:38:08.404798 sshd[5811]: Accepted publickey for core from 139.178.68.195 port 37382 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:08.408111 sshd[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:08.416377 systemd-logind[2113]: New session 10 of user core. Oct 8 19:38:08.422651 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:38:08.688675 sshd[5811]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:08.698982 systemd[1]: sshd@9-172.31.17.52:22-139.178.68.195:37382.service: Deactivated successfully. Oct 8 19:38:08.705914 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:38:08.707801 systemd-logind[2113]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:38:08.714904 systemd-logind[2113]: Removed session 10. Oct 8 19:38:08.720494 systemd[1]: Started sshd@10-172.31.17.52:22-139.178.68.195:37396.service - OpenSSH per-connection server daemon (139.178.68.195:37396). Oct 8 19:38:08.905131 sshd[5826]: Accepted publickey for core from 139.178.68.195 port 37396 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:08.908520 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:08.917978 systemd-logind[2113]: New session 11 of user core. Oct 8 19:38:08.928727 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:38:09.276787 sshd[5826]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:09.296655 systemd-logind[2113]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:38:09.297258 systemd[1]: sshd@10-172.31.17.52:22-139.178.68.195:37396.service: Deactivated successfully. Oct 8 19:38:09.312758 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:38:09.339495 systemd[1]: Started sshd@11-172.31.17.52:22-139.178.68.195:37402.service - OpenSSH per-connection server daemon (139.178.68.195:37402). 
Oct 8 19:38:09.343325 systemd-logind[2113]: Removed session 11. Oct 8 19:38:09.524505 sshd[5838]: Accepted publickey for core from 139.178.68.195 port 37402 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:09.527447 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:09.537797 systemd-logind[2113]: New session 12 of user core. Oct 8 19:38:09.542610 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:38:09.813338 sshd[5838]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:09.820127 systemd[1]: sshd@11-172.31.17.52:22-139.178.68.195:37402.service: Deactivated successfully. Oct 8 19:38:09.828881 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:38:09.832640 systemd-logind[2113]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:38:09.834411 systemd-logind[2113]: Removed session 12. Oct 8 19:38:14.841544 systemd[1]: Started sshd@12-172.31.17.52:22-139.178.68.195:56906.service - OpenSSH per-connection server daemon (139.178.68.195:56906). Oct 8 19:38:15.023166 sshd[5899]: Accepted publickey for core from 139.178.68.195 port 56906 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:15.025778 sshd[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:15.034130 systemd-logind[2113]: New session 13 of user core. Oct 8 19:38:15.039841 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:38:15.293401 sshd[5899]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:15.301551 systemd[1]: sshd@12-172.31.17.52:22-139.178.68.195:56906.service: Deactivated successfully. Oct 8 19:38:15.310949 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:38:15.313293 systemd-logind[2113]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:38:15.315375 systemd-logind[2113]: Removed session 13. 
Oct 8 19:38:20.322568 systemd[1]: Started sshd@13-172.31.17.52:22-139.178.68.195:56918.service - OpenSSH per-connection server daemon (139.178.68.195:56918). Oct 8 19:38:20.510274 sshd[5919]: Accepted publickey for core from 139.178.68.195 port 56918 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:20.512873 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:20.529013 systemd-logind[2113]: New session 14 of user core. Oct 8 19:38:20.533949 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:38:20.826295 sshd[5919]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:20.831993 systemd[1]: sshd@13-172.31.17.52:22-139.178.68.195:56918.service: Deactivated successfully. Oct 8 19:38:20.841394 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:38:20.843389 systemd-logind[2113]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:38:20.847880 systemd-logind[2113]: Removed session 14. Oct 8 19:38:25.856998 systemd[1]: Started sshd@14-172.31.17.52:22-139.178.68.195:34414.service - OpenSSH per-connection server daemon (139.178.68.195:34414). Oct 8 19:38:26.042872 sshd[5965]: Accepted publickey for core from 139.178.68.195 port 34414 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:26.046467 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:26.058169 systemd-logind[2113]: New session 15 of user core. Oct 8 19:38:26.071868 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:38:26.344020 sshd[5965]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:26.351850 systemd[1]: sshd@14-172.31.17.52:22-139.178.68.195:34414.service: Deactivated successfully. Oct 8 19:38:26.359949 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:38:26.363520 systemd-logind[2113]: Session 15 logged out. Waiting for processes to exit. 
Oct 8 19:38:26.365554 systemd-logind[2113]: Removed session 15. Oct 8 19:38:31.381486 systemd[1]: Started sshd@15-172.31.17.52:22-139.178.68.195:59516.service - OpenSSH per-connection server daemon (139.178.68.195:59516). Oct 8 19:38:31.589274 sshd[5979]: Accepted publickey for core from 139.178.68.195 port 59516 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:31.594845 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:31.607429 systemd-logind[2113]: New session 16 of user core. Oct 8 19:38:31.616213 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:38:31.946622 sshd[5979]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:31.962279 systemd[1]: sshd@15-172.31.17.52:22-139.178.68.195:59516.service: Deactivated successfully. Oct 8 19:38:31.970757 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:38:31.976110 systemd-logind[2113]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:38:31.980886 systemd-logind[2113]: Removed session 16. Oct 8 19:38:36.986282 systemd[1]: Started sshd@16-172.31.17.52:22-139.178.68.195:59518.service - OpenSSH per-connection server daemon (139.178.68.195:59518). Oct 8 19:38:37.180238 sshd[6004]: Accepted publickey for core from 139.178.68.195 port 59518 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:37.182351 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:37.202255 systemd-logind[2113]: New session 17 of user core. Oct 8 19:38:37.215678 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:38:37.628893 sshd[6004]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:37.649540 systemd-logind[2113]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:38:37.652292 systemd[1]: sshd@16-172.31.17.52:22-139.178.68.195:59518.service: Deactivated successfully. 
Oct 8 19:38:37.668409 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:38:37.694614 systemd[1]: Started sshd@17-172.31.17.52:22-139.178.68.195:59520.service - OpenSSH per-connection server daemon (139.178.68.195:59520). Oct 8 19:38:37.701169 systemd-logind[2113]: Removed session 17. Oct 8 19:38:37.902560 sshd[6018]: Accepted publickey for core from 139.178.68.195 port 59520 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:37.905327 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:37.913795 systemd-logind[2113]: New session 18 of user core. Oct 8 19:38:37.919532 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:38:38.592865 sshd[6018]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:38.609627 systemd[1]: sshd@17-172.31.17.52:22-139.178.68.195:59520.service: Deactivated successfully. Oct 8 19:38:38.625583 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:38:38.628529 systemd-logind[2113]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:38:38.651570 systemd[1]: Started sshd@18-172.31.17.52:22-139.178.68.195:59530.service - OpenSSH per-connection server daemon (139.178.68.195:59530). Oct 8 19:38:38.656727 systemd-logind[2113]: Removed session 18. 
Oct 8 19:38:38.702067 kubelet[3635]: I1008 19:38:38.695974 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7rjnd" podStartSLOduration=65.773217346 podStartE2EDuration="1m9.695900851s" podCreationTimestamp="2024-10-08 19:37:29 +0000 UTC" firstStartedPulling="2024-10-08 19:37:58.341568846 +0000 UTC m=+51.875426274" lastFinishedPulling="2024-10-08 19:38:02.264252351 +0000 UTC m=+55.798109779" observedRunningTime="2024-10-08 19:38:03.48756586 +0000 UTC m=+57.021423348" watchObservedRunningTime="2024-10-08 19:38:38.695900851 +0000 UTC m=+92.229758279" Oct 8 19:38:38.702067 kubelet[3635]: I1008 19:38:38.696376 3635 topology_manager.go:215] "Topology Admit Handler" podUID="713ffd0d-5602-40e6-a317-dce949f0e94d" podNamespace="calico-apiserver" podName="calico-apiserver-7b9bd6f945-lgrtx" Oct 8 19:38:38.792156 kubelet[3635]: I1008 19:38:38.792089 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzr9s\" (UniqueName: \"kubernetes.io/projected/713ffd0d-5602-40e6-a317-dce949f0e94d-kube-api-access-qzr9s\") pod \"calico-apiserver-7b9bd6f945-lgrtx\" (UID: \"713ffd0d-5602-40e6-a317-dce949f0e94d\") " pod="calico-apiserver/calico-apiserver-7b9bd6f945-lgrtx" Oct 8 19:38:38.793367 kubelet[3635]: I1008 19:38:38.793248 3635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/713ffd0d-5602-40e6-a317-dce949f0e94d-calico-apiserver-certs\") pod \"calico-apiserver-7b9bd6f945-lgrtx\" (UID: \"713ffd0d-5602-40e6-a317-dce949f0e94d\") " pod="calico-apiserver/calico-apiserver-7b9bd6f945-lgrtx" Oct 8 19:38:38.896461 kubelet[3635]: E1008 19:38:38.895375 3635 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:38:38.896461 kubelet[3635]: E1008 19:38:38.895480 3635 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/713ffd0d-5602-40e6-a317-dce949f0e94d-calico-apiserver-certs podName:713ffd0d-5602-40e6-a317-dce949f0e94d nodeName:}" failed. No retries permitted until 2024-10-08 19:38:39.395453569 +0000 UTC m=+92.929310985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/713ffd0d-5602-40e6-a317-dce949f0e94d-calico-apiserver-certs") pod "calico-apiserver-7b9bd6f945-lgrtx" (UID: "713ffd0d-5602-40e6-a317-dce949f0e94d") : secret "calico-apiserver-certs" not found Oct 8 19:38:38.909099 sshd[6032]: Accepted publickey for core from 139.178.68.195 port 59530 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:38.915625 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:38.962497 systemd-logind[2113]: New session 19 of user core. Oct 8 19:38:38.974973 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:38:39.666551 containerd[2143]: time="2024-10-08T19:38:39.666185433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9bd6f945-lgrtx,Uid:713ffd0d-5602-40e6-a317-dce949f0e94d,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:38:40.439885 systemd-networkd[1685]: cali4fd335e245e: Link UP Oct 8 19:38:40.447501 systemd-networkd[1685]: cali4fd335e245e: Gained carrier Oct 8 19:38:40.466643 (udev-worker)[6071]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.005 [INFO][6053] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0 calico-apiserver-7b9bd6f945- calico-apiserver 713ffd0d-5602-40e6-a317-dce949f0e94d 1051 0 2024-10-08 19:38:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b9bd6f945 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-52 calico-apiserver-7b9bd6f945-lgrtx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4fd335e245e [] []}} ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.006 [INFO][6053] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.223 [INFO][6064] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" HandleID="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Workload="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.276 [INFO][6064] ipam_plugin.go 270: Auto assigning IP ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" 
HandleID="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Workload="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039f0b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-52", "pod":"calico-apiserver-7b9bd6f945-lgrtx", "timestamp":"2024-10-08 19:38:40.22329287 +0000 UTC"}, Hostname:"ip-172-31-17-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.276 [INFO][6064] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.276 [INFO][6064] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.276 [INFO][6064] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-52' Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.299 [INFO][6064] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.323 [INFO][6064] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.334 [INFO][6064] ipam.go 489: Trying affinity for 192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.339 [INFO][6064] ipam.go 155: Attempting to load block cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.345 [INFO][6064] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ip-172-31-17-52" Oct 8 19:38:40.501313 
containerd[2143]: 2024-10-08 19:38:40.345 [INFO][6064] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.351 [INFO][6064] ipam.go 1685: Creating new handle: k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71 Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.361 [INFO][6064] ipam.go 1203: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.414 [INFO][6064] ipam.go 1216: Successfully claimed IPs: [192.168.107.133/26] block=192.168.107.128/26 handle="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.414 [INFO][6064] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.133/26] handle="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" host="ip-172-31-17-52" Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.414 [INFO][6064] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:38:40.501313 containerd[2143]: 2024-10-08 19:38:40.414 [INFO][6064] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.107.133/26] IPv6=[] ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" HandleID="k8s-pod-network.0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Workload="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0"
Oct 8 19:38:40.504742 containerd[2143]: 2024-10-08 19:38:40.425 [INFO][6053] k8s.go 386: Populated endpoint ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0", GenerateName:"calico-apiserver-7b9bd6f945-", Namespace:"calico-apiserver", SelfLink:"", UID:"713ffd0d-5602-40e6-a317-dce949f0e94d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 38, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9bd6f945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"", Pod:"calico-apiserver-7b9bd6f945-lgrtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fd335e245e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:38:40.504742 containerd[2143]: 2024-10-08 19:38:40.425 [INFO][6053] k8s.go 387: Calico CNI using IPs: [192.168.107.133/32] ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0"
Oct 8 19:38:40.504742 containerd[2143]: 2024-10-08 19:38:40.426 [INFO][6053] dataplane_linux.go 68: Setting the host side veth name to cali4fd335e245e ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0"
Oct 8 19:38:40.504742 containerd[2143]: 2024-10-08 19:38:40.442 [INFO][6053] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0"
Oct 8 19:38:40.504742 containerd[2143]: 2024-10-08 19:38:40.442 [INFO][6053] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0", GenerateName:"calico-apiserver-7b9bd6f945-", Namespace:"calico-apiserver", SelfLink:"", UID:"713ffd0d-5602-40e6-a317-dce949f0e94d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 38, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9bd6f945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-52", ContainerID:"0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71", Pod:"calico-apiserver-7b9bd6f945-lgrtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fd335e245e", MAC:"ca:da:a2:5e:9f:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:38:40.504742 containerd[2143]: 2024-10-08 19:38:40.486 [INFO][6053] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71" Namespace="calico-apiserver" Pod="calico-apiserver-7b9bd6f945-lgrtx" WorkloadEndpoint="ip--172--31--17--52-k8s-calico--apiserver--7b9bd6f945--lgrtx-eth0"
Oct 8 19:38:40.576842 containerd[2143]: time="2024-10-08T19:38:40.576655073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:38:40.576842 containerd[2143]: time="2024-10-08T19:38:40.576783613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:38:40.579463 containerd[2143]: time="2024-10-08T19:38:40.577288800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:38:40.579463 containerd[2143]: time="2024-10-08T19:38:40.578291199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:38:40.877983 containerd[2143]: time="2024-10-08T19:38:40.877827089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9bd6f945-lgrtx,Uid:713ffd0d-5602-40e6-a317-dce949f0e94d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71\""
Oct 8 19:38:40.885520 containerd[2143]: time="2024-10-08T19:38:40.885153911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 19:38:41.966411 systemd-networkd[1685]: cali4fd335e245e: Gained IPv6LL
Oct 8 19:38:43.198579 sshd[6032]: pam_unix(sshd:session): session closed for user core
Oct 8 19:38:43.240233 systemd[1]: Started sshd@19-172.31.17.52:22-139.178.68.195:33888.service - OpenSSH per-connection server daemon (139.178.68.195:33888).
Oct 8 19:38:43.241255 systemd[1]: sshd@18-172.31.17.52:22-139.178.68.195:59530.service: Deactivated successfully.
Oct 8 19:38:43.264146 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:38:43.276813 systemd-logind[2113]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:38:43.289988 systemd-logind[2113]: Removed session 19.
Oct 8 19:38:43.506490 sshd[6153]: Accepted publickey for core from 139.178.68.195 port 33888 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:38:43.510258 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:38:43.525422 systemd-logind[2113]: New session 20 of user core.
Oct 8 19:38:43.531396 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:38:44.377492 sshd[6153]: pam_unix(sshd:session): session closed for user core
Oct 8 19:38:44.400759 systemd[1]: sshd@19-172.31.17.52:22-139.178.68.195:33888.service: Deactivated successfully.
Oct 8 19:38:44.413352 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:38:44.425986 systemd-logind[2113]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:38:44.433830 systemd[1]: Started sshd@20-172.31.17.52:22-139.178.68.195:33904.service - OpenSSH per-connection server daemon (139.178.68.195:33904).
Oct 8 19:38:44.439972 systemd-logind[2113]: Removed session 20.
Oct 8 19:38:44.544937 containerd[2143]: time="2024-10-08T19:38:44.544812654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:38:44.548932 containerd[2143]: time="2024-10-08T19:38:44.548865126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884"
Oct 8 19:38:44.550492 containerd[2143]: time="2024-10-08T19:38:44.550368130Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:38:44.558100 containerd[2143]: time="2024-10-08T19:38:44.556985472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:38:44.559521 containerd[2143]: time="2024-10-08T19:38:44.558994683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 3.673765318s"
Oct 8 19:38:44.559521 containerd[2143]: time="2024-10-08T19:38:44.559140398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Oct 8 19:38:44.563568 containerd[2143]: time="2024-10-08T19:38:44.563362406Z" level=info msg="CreateContainer within sandbox \"0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 19:38:44.589707 containerd[2143]: time="2024-10-08T19:38:44.588758790Z" level=info msg="CreateContainer within sandbox \"0f7a0cb876e1989a863d3e0705e79081b6b0c504af207c50e392eda490384d71\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b3c9fcff696f2ff0ad8d5cf192488940a395d5f06a0c79840e1c0f9413f330a4\""
Oct 8 19:38:44.593606 containerd[2143]: time="2024-10-08T19:38:44.591913571Z" level=info msg="StartContainer for \"b3c9fcff696f2ff0ad8d5cf192488940a395d5f06a0c79840e1c0f9413f330a4\""
Oct 8 19:38:44.624765 ntpd[2094]: Listen normally on 12 cali4fd335e245e [fe80::ecee:eeff:feee:eeee%11]:123
Oct 8 19:38:44.625940 ntpd[2094]: 8 Oct 19:38:44 ntpd[2094]: Listen normally on 12 cali4fd335e245e [fe80::ecee:eeff:feee:eeee%11]:123
Oct 8 19:38:44.647198 sshd[6173]: Accepted publickey for core from 139.178.68.195 port 33904 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:38:44.655352 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:38:44.681633 systemd-logind[2113]: New session 21 of user core.
Oct 8 19:38:44.684676 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:38:44.755894 containerd[2143]: time="2024-10-08T19:38:44.755816332Z" level=info msg="StartContainer for \"b3c9fcff696f2ff0ad8d5cf192488940a395d5f06a0c79840e1c0f9413f330a4\" returns successfully"
Oct 8 19:38:44.990940 sshd[6173]: pam_unix(sshd:session): session closed for user core
Oct 8 19:38:45.000885 systemd[1]: sshd@20-172.31.17.52:22-139.178.68.195:33904.service: Deactivated successfully.
Oct 8 19:38:45.012521 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:38:45.017002 systemd-logind[2113]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:38:45.019559 systemd-logind[2113]: Removed session 21.
Oct 8 19:38:45.674615 kubelet[3635]: I1008 19:38:45.674265 3635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b9bd6f945-lgrtx" podStartSLOduration=3.995807814 podStartE2EDuration="7.672009904s" podCreationTimestamp="2024-10-08 19:38:38 +0000 UTC" firstStartedPulling="2024-10-08 19:38:40.883799849 +0000 UTC m=+94.417657277" lastFinishedPulling="2024-10-08 19:38:44.560001927 +0000 UTC m=+98.093859367" observedRunningTime="2024-10-08 19:38:45.67046738 +0000 UTC m=+99.204324808" watchObservedRunningTime="2024-10-08 19:38:45.672009904 +0000 UTC m=+99.205867332"
Oct 8 19:38:50.021580 systemd[1]: Started sshd@21-172.31.17.52:22-139.178.68.195:33920.service - OpenSSH per-connection server daemon (139.178.68.195:33920).
Oct 8 19:38:50.198920 sshd[6239]: Accepted publickey for core from 139.178.68.195 port 33920 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:38:50.202085 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:38:50.209695 systemd-logind[2113]: New session 22 of user core.
Oct 8 19:38:50.217538 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:38:50.470765 sshd[6239]: pam_unix(sshd:session): session closed for user core
Oct 8 19:38:50.478177 systemd-logind[2113]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:38:50.479356 systemd[1]: sshd@21-172.31.17.52:22-139.178.68.195:33920.service: Deactivated successfully.
Oct 8 19:38:50.486263 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:38:50.488210 systemd-logind[2113]: Removed session 22.
Oct 8 19:38:55.500773 systemd[1]: Started sshd@22-172.31.17.52:22-139.178.68.195:46114.service - OpenSSH per-connection server daemon (139.178.68.195:46114).
Oct 8 19:38:55.675934 sshd[6288]: Accepted publickey for core from 139.178.68.195 port 46114 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:38:55.679859 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:38:55.689571 systemd-logind[2113]: New session 23 of user core.
Oct 8 19:38:55.695645 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:38:55.942397 sshd[6288]: pam_unix(sshd:session): session closed for user core
Oct 8 19:38:55.953988 systemd[1]: sshd@22-172.31.17.52:22-139.178.68.195:46114.service: Deactivated successfully.
Oct 8 19:38:55.964090 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:38:55.966285 systemd-logind[2113]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:38:55.968679 systemd-logind[2113]: Removed session 23.
Oct 8 19:39:00.976493 systemd[1]: Started sshd@23-172.31.17.52:22-139.178.68.195:49754.service - OpenSSH per-connection server daemon (139.178.68.195:49754).
Oct 8 19:39:01.172704 sshd[6304]: Accepted publickey for core from 139.178.68.195 port 49754 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:39:01.178385 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:39:01.200659 systemd-logind[2113]: New session 24 of user core.
Oct 8 19:39:01.208060 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:39:01.527410 sshd[6304]: pam_unix(sshd:session): session closed for user core
Oct 8 19:39:01.540284 systemd[1]: sshd@23-172.31.17.52:22-139.178.68.195:49754.service: Deactivated successfully.
Oct 8 19:39:01.546582 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:39:01.551124 systemd-logind[2113]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:39:01.553607 systemd-logind[2113]: Removed session 24.
Oct 8 19:39:06.561640 systemd[1]: Started sshd@24-172.31.17.52:22-139.178.68.195:49758.service - OpenSSH per-connection server daemon (139.178.68.195:49758).
Oct 8 19:39:06.755924 sshd[6326]: Accepted publickey for core from 139.178.68.195 port 49758 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:39:06.758883 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:39:06.769313 systemd-logind[2113]: New session 25 of user core.
Oct 8 19:39:06.772761 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:39:07.024444 sshd[6326]: pam_unix(sshd:session): session closed for user core
Oct 8 19:39:07.032635 systemd[1]: sshd@24-172.31.17.52:22-139.178.68.195:49758.service: Deactivated successfully.
Oct 8 19:39:07.040362 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:39:07.042450 systemd-logind[2113]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:39:07.044998 systemd-logind[2113]: Removed session 25.
Oct 8 19:39:12.061515 systemd[1]: Started sshd@25-172.31.17.52:22-139.178.68.195:46032.service - OpenSSH per-connection server daemon (139.178.68.195:46032).
Oct 8 19:39:12.238156 sshd[6367]: Accepted publickey for core from 139.178.68.195 port 46032 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:39:12.241476 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:39:12.250336 systemd-logind[2113]: New session 26 of user core.
Oct 8 19:39:12.259643 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:39:12.569498 sshd[6367]: pam_unix(sshd:session): session closed for user core
Oct 8 19:39:12.581436 systemd[1]: sshd@25-172.31.17.52:22-139.178.68.195:46032.service: Deactivated successfully.
Oct 8 19:39:12.590545 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:39:12.593949 systemd-logind[2113]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:39:12.598746 systemd-logind[2113]: Removed session 26.
Oct 8 19:39:17.600784 systemd[1]: Started sshd@26-172.31.17.52:22-139.178.68.195:46048.service - OpenSSH per-connection server daemon (139.178.68.195:46048).
Oct 8 19:39:17.791820 sshd[6404]: Accepted publickey for core from 139.178.68.195 port 46048 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:39:17.794827 sshd[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:39:17.804410 systemd-logind[2113]: New session 27 of user core.
Oct 8 19:39:17.815541 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 19:39:18.071142 sshd[6404]: pam_unix(sshd:session): session closed for user core
Oct 8 19:39:18.078563 systemd-logind[2113]: Session 27 logged out. Waiting for processes to exit.
Oct 8 19:39:18.084864 systemd[1]: sshd@26-172.31.17.52:22-139.178.68.195:46048.service: Deactivated successfully.
Oct 8 19:39:18.094422 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 19:39:18.096874 systemd-logind[2113]: Removed session 27.
Oct 8 19:39:23.102615 systemd[1]: Started sshd@27-172.31.17.52:22-139.178.68.195:35954.service - OpenSSH per-connection server daemon (139.178.68.195:35954).
Oct 8 19:39:23.291597 sshd[6444]: Accepted publickey for core from 139.178.68.195 port 35954 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:39:23.294268 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:39:23.306787 systemd-logind[2113]: New session 28 of user core.
Oct 8 19:39:23.314851 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 8 19:39:23.564134 sshd[6444]: pam_unix(sshd:session): session closed for user core
Oct 8 19:39:23.570200 systemd-logind[2113]: Session 28 logged out. Waiting for processes to exit.
Oct 8 19:39:23.572831 systemd[1]: sshd@27-172.31.17.52:22-139.178.68.195:35954.service: Deactivated successfully.
Oct 8 19:39:23.578154 systemd[1]: session-28.scope: Deactivated successfully.
Oct 8 19:39:23.579888 systemd-logind[2113]: Removed session 28.
Oct 8 19:39:48.716007 containerd[2143]: time="2024-10-08T19:39:48.714986619Z" level=info msg="shim disconnected" id=725ec0cf4e1b4ef8f4956734c9fbb3b04dfb25f3fcb8e124620bcade54c86f4a namespace=k8s.io
Oct 8 19:39:48.717695 containerd[2143]: time="2024-10-08T19:39:48.716006475Z" level=warning msg="cleaning up after shim disconnected" id=725ec0cf4e1b4ef8f4956734c9fbb3b04dfb25f3fcb8e124620bcade54c86f4a namespace=k8s.io
Oct 8 19:39:48.717695 containerd[2143]: time="2024-10-08T19:39:48.716085975Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:39:48.721522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-725ec0cf4e1b4ef8f4956734c9fbb3b04dfb25f3fcb8e124620bcade54c86f4a-rootfs.mount: Deactivated successfully.
Oct 8 19:39:48.848288 kubelet[3635]: I1008 19:39:48.848085 3635 scope.go:117] "RemoveContainer" containerID="725ec0cf4e1b4ef8f4956734c9fbb3b04dfb25f3fcb8e124620bcade54c86f4a"
Oct 8 19:39:48.853188 containerd[2143]: time="2024-10-08T19:39:48.853121631Z" level=info msg="CreateContainer within sandbox \"838c89eb35b0228d3caf99a338d167bea91925f613f0f110cb440c9e02c2fd3b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 8 19:39:48.878056 containerd[2143]: time="2024-10-08T19:39:48.877151955Z" level=info msg="CreateContainer within sandbox \"838c89eb35b0228d3caf99a338d167bea91925f613f0f110cb440c9e02c2fd3b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1ee65e7c7be89a9331447018516707b1d95e7013c4474b439a665ddbe3400cfc\""
Oct 8 19:39:48.879532 containerd[2143]: time="2024-10-08T19:39:48.879473199Z" level=info msg="StartContainer for \"1ee65e7c7be89a9331447018516707b1d95e7013c4474b439a665ddbe3400cfc\""
Oct 8 19:39:48.988112 containerd[2143]: time="2024-10-08T19:39:48.986548420Z" level=info msg="StartContainer for \"1ee65e7c7be89a9331447018516707b1d95e7013c4474b439a665ddbe3400cfc\" returns successfully"
Oct 8 19:39:49.124168 containerd[2143]: time="2024-10-08T19:39:49.123969481Z" level=info msg="shim disconnected" id=353e482c6c40b46f2e29cb3cc300376a799968b46b7fd864b09080812459b696 namespace=k8s.io
Oct 8 19:39:49.124168 containerd[2143]: time="2024-10-08T19:39:49.124119193Z" level=warning msg="cleaning up after shim disconnected" id=353e482c6c40b46f2e29cb3cc300376a799968b46b7fd864b09080812459b696 namespace=k8s.io
Oct 8 19:39:49.124168 containerd[2143]: time="2024-10-08T19:39:49.124142365Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:39:49.716844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-353e482c6c40b46f2e29cb3cc300376a799968b46b7fd864b09080812459b696-rootfs.mount: Deactivated successfully.
Oct 8 19:39:49.856812 kubelet[3635]: I1008 19:39:49.856542 3635 scope.go:117] "RemoveContainer" containerID="353e482c6c40b46f2e29cb3cc300376a799968b46b7fd864b09080812459b696"
Oct 8 19:39:49.863571 containerd[2143]: time="2024-10-08T19:39:49.863466952Z" level=info msg="CreateContainer within sandbox \"b90122a56610340c936e9dac29f2ef05e1390e7215b4071c6ec00e0a3a60eb0b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 8 19:39:49.893146 containerd[2143]: time="2024-10-08T19:39:49.892556572Z" level=info msg="CreateContainer within sandbox \"b90122a56610340c936e9dac29f2ef05e1390e7215b4071c6ec00e0a3a60eb0b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"703803acd80f2e5b433a39511a8ca88385be1efab99d3f76ca8f70dc27eab396\""
Oct 8 19:39:49.895648 containerd[2143]: time="2024-10-08T19:39:49.894774796Z" level=info msg="StartContainer for \"703803acd80f2e5b433a39511a8ca88385be1efab99d3f76ca8f70dc27eab396\""
Oct 8 19:39:50.019064 containerd[2143]: time="2024-10-08T19:39:50.018374209Z" level=info msg="StartContainer for \"703803acd80f2e5b433a39511a8ca88385be1efab99d3f76ca8f70dc27eab396\" returns successfully"
Oct 8 19:39:50.100137 kubelet[3635]: E1008 19:39:50.100062 3635 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-52?timeout=10s\": context deadline exceeded"
Oct 8 19:39:54.216119 containerd[2143]: time="2024-10-08T19:39:54.215884218Z" level=info msg="shim disconnected" id=985439acf2d3737ba7f2c641dc8305467ac7c628b5116ebb3d94b07af655e3ed namespace=k8s.io
Oct 8 19:39:54.216119 containerd[2143]: time="2024-10-08T19:39:54.216073434Z" level=warning msg="cleaning up after shim disconnected" id=985439acf2d3737ba7f2c641dc8305467ac7c628b5116ebb3d94b07af655e3ed namespace=k8s.io
Oct 8 19:39:54.216815 containerd[2143]: time="2024-10-08T19:39:54.216097158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:39:54.222979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-985439acf2d3737ba7f2c641dc8305467ac7c628b5116ebb3d94b07af655e3ed-rootfs.mount: Deactivated successfully.
Oct 8 19:39:54.877898 kubelet[3635]: I1008 19:39:54.877611 3635 scope.go:117] "RemoveContainer" containerID="985439acf2d3737ba7f2c641dc8305467ac7c628b5116ebb3d94b07af655e3ed"
Oct 8 19:39:54.881557 containerd[2143]: time="2024-10-08T19:39:54.881505837Z" level=info msg="CreateContainer within sandbox \"ab05dfe3f4fe83431eb6e64ed55cdd437b1283a5819ecd57b97a3216ae665a63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 8 19:39:54.907333 containerd[2143]: time="2024-10-08T19:39:54.907251537Z" level=info msg="CreateContainer within sandbox \"ab05dfe3f4fe83431eb6e64ed55cdd437b1283a5819ecd57b97a3216ae665a63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d8592d6513c3b3ac83f4f50c9f2a172ce058410daf3f08caaab9069cc494a248\""
Oct 8 19:39:54.907974 containerd[2143]: time="2024-10-08T19:39:54.907935165Z" level=info msg="StartContainer for \"d8592d6513c3b3ac83f4f50c9f2a172ce058410daf3f08caaab9069cc494a248\""
Oct 8 19:39:55.035767 containerd[2143]: time="2024-10-08T19:39:55.035684622Z" level=info msg="StartContainer for \"d8592d6513c3b3ac83f4f50c9f2a172ce058410daf3f08caaab9069cc494a248\" returns successfully"
Oct 8 19:40:00.101535 kubelet[3635]: E1008 19:40:00.101447 3635 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-52?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"