Sep 4 17:10:39.210374 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 17:10:39.210422 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024
Sep 4 17:10:39.210448 kernel: KASLR disabled due to lack of seed
Sep 4 17:10:39.210464 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:10:39.210480 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Sep 4 17:10:39.210496 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:10:39.210514 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 17:10:39.210530 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 17:10:39.210546 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:10:39.210562 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 17:10:39.210583 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:10:39.210599 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 17:10:39.210615 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 17:10:39.210631 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 17:10:39.210650 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:10:39.210670 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 17:10:39.210688 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 17:10:39.210704 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 17:10:39.210721 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 17:10:39.210738 kernel: printk: bootconsole [uart0] enabled
Sep 4 17:10:39.210754 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:10:39.210771 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:10:39.210788 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 17:10:39.210805 kernel: Zone ranges:
Sep 4 17:10:39.210821 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 17:10:39.210838 kernel: DMA32 empty
Sep 4 17:10:39.210859 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 17:10:39.210875 kernel: Movable zone start for each node
Sep 4 17:10:39.210892 kernel: Early memory node ranges
Sep 4 17:10:39.210909 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 17:10:39.210925 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 17:10:39.210942 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 17:10:39.210959 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 17:10:39.210976 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 17:10:39.210993 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 17:10:39.211013 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 17:10:39.211031 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 17:10:39.211047 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:10:39.211069 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 17:10:39.211086 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:10:39.211131 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 17:10:39.211151 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:10:39.211170 kernel: psci: Trusted OS migration not required
Sep 4 17:10:39.211192 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:10:39.211210 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:10:39.211228 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:10:39.211246 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:10:39.211264 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:10:39.211281 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:10:39.212208 kernel: CPU features: detected: Spectre-v2
Sep 4 17:10:39.212238 kernel: CPU features: detected: Spectre-v3a
Sep 4 17:10:39.212256 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:10:39.212273 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 17:10:39.212291 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 17:10:39.212317 kernel: alternatives: applying boot alternatives
Sep 4 17:10:39.212337 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:10:39.212357 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:10:39.212375 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:10:39.212392 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:10:39.212410 kernel: Fallback order for Node 0: 0
Sep 4 17:10:39.212428 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 17:10:39.212446 kernel: Policy zone: Normal
Sep 4 17:10:39.212463 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:10:39.212481 kernel: software IO TLB: area num 2.
Sep 4 17:10:39.212498 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 17:10:39.212521 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Sep 4 17:10:39.212539 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:10:39.212557 kernel: trace event string verifier disabled
Sep 4 17:10:39.212575 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:10:39.212593 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:10:39.212611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:10:39.212629 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:10:39.212647 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:10:39.212665 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:10:39.212682 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:10:39.212700 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:10:39.212722 kernel: GICv3: 96 SPIs implemented
Sep 4 17:10:39.212740 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:10:39.212757 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:10:39.212775 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 17:10:39.212792 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 17:10:39.212809 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 17:10:39.212827 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:10:39.212845 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:10:39.212863 kernel: GICv3: using LPI property table @0x00000004000e0000
Sep 4 17:10:39.212880 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 17:10:39.212898 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Sep 4 17:10:39.212916 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:10:39.212938 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 17:10:39.212956 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 17:10:39.212973 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 17:10:39.212991 kernel: Console: colour dummy device 80x25
Sep 4 17:10:39.213009 kernel: printk: console [tty1] enabled
Sep 4 17:10:39.213027 kernel: ACPI: Core revision 20230628
Sep 4 17:10:39.213045 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 17:10:39.213063 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:10:39.213081 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:10:39.215950 kernel: SELinux: Initializing.
Sep 4 17:10:39.215992 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:10:39.216011 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:10:39.216034 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:10:39.216053 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:10:39.216071 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:10:39.216090 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:10:39.216242 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 17:10:39.216266 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 17:10:39.216285 kernel: Remapping and enabling EFI services.
Sep 4 17:10:39.216309 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:10:39.216328 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:10:39.216346 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 17:10:39.216364 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Sep 4 17:10:39.216382 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 17:10:39.216400 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:10:39.216418 kernel: SMP: Total of 2 processors activated.
Sep 4 17:10:39.216436 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:10:39.216454 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 17:10:39.216476 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:10:39.216494 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:10:39.216523 kernel: alternatives: applying system-wide alternatives
Sep 4 17:10:39.216547 kernel: devtmpfs: initialized
Sep 4 17:10:39.216566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:10:39.216584 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:10:39.216603 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:10:39.216622 kernel: SMBIOS 3.0.0 present.
Sep 4 17:10:39.216641 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 17:10:39.216664 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:10:39.216683 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:10:39.216702 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:10:39.216721 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:10:39.216739 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:10:39.216758 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1
Sep 4 17:10:39.216777 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:10:39.216800 kernel: cpuidle: using governor menu
Sep 4 17:10:39.216819 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:10:39.216837 kernel: ASID allocator initialised with 65536 entries
Sep 4 17:10:39.216856 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:10:39.216874 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:10:39.216893 kernel: Modules: 17600 pages in range for non-PLT usage
Sep 4 17:10:39.216912 kernel: Modules: 509120 pages in range for PLT usage
Sep 4 17:10:39.216931 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:10:39.216949 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:10:39.216972 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:10:39.216991 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:10:39.217010 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:10:39.217028 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:10:39.217047 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:10:39.217066 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:10:39.217085 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:10:39.217144 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:10:39.217165 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:10:39.217189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:10:39.217208 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:10:39.217227 kernel: ACPI: Interpreter enabled
Sep 4 17:10:39.217245 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:10:39.217264 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:10:39.217282 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 17:10:39.217571 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:10:39.217811 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:10:39.218018 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:10:39.218272 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 17:10:39.218481 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 17:10:39.218508 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 17:10:39.218527 kernel: acpiphp: Slot [1] registered
Sep 4 17:10:39.218546 kernel: acpiphp: Slot [2] registered
Sep 4 17:10:39.218565 kernel: acpiphp: Slot [3] registered
Sep 4 17:10:39.218584 kernel: acpiphp: Slot [4] registered
Sep 4 17:10:39.218603 kernel: acpiphp: Slot [5] registered
Sep 4 17:10:39.218632 kernel: acpiphp: Slot [6] registered
Sep 4 17:10:39.218651 kernel: acpiphp: Slot [7] registered
Sep 4 17:10:39.218670 kernel: acpiphp: Slot [8] registered
Sep 4 17:10:39.218689 kernel: acpiphp: Slot [9] registered
Sep 4 17:10:39.218708 kernel: acpiphp: Slot [10] registered
Sep 4 17:10:39.218727 kernel: acpiphp: Slot [11] registered
Sep 4 17:10:39.218748 kernel: acpiphp: Slot [12] registered
Sep 4 17:10:39.218782 kernel: acpiphp: Slot [13] registered
Sep 4 17:10:39.218829 kernel: acpiphp: Slot [14] registered
Sep 4 17:10:39.218892 kernel: acpiphp: Slot [15] registered
Sep 4 17:10:39.218917 kernel: acpiphp: Slot [16] registered
Sep 4 17:10:39.218941 kernel: acpiphp: Slot [17] registered
Sep 4 17:10:39.218966 kernel: acpiphp: Slot [18] registered
Sep 4 17:10:39.218985 kernel: acpiphp: Slot [19] registered
Sep 4 17:10:39.219004 kernel: acpiphp: Slot [20] registered
Sep 4 17:10:39.219022 kernel: acpiphp: Slot [21] registered
Sep 4 17:10:39.219041 kernel: acpiphp: Slot [22] registered
Sep 4 17:10:39.219060 kernel: acpiphp: Slot [23] registered
Sep 4 17:10:39.219079 kernel: acpiphp: Slot [24] registered
Sep 4 17:10:39.219134 kernel: acpiphp: Slot [25] registered
Sep 4 17:10:39.219182 kernel: acpiphp: Slot [26] registered
Sep 4 17:10:39.219201 kernel: acpiphp: Slot [27] registered
Sep 4 17:10:39.219220 kernel: acpiphp: Slot [28] registered
Sep 4 17:10:39.219239 kernel: acpiphp: Slot [29] registered
Sep 4 17:10:39.219257 kernel: acpiphp: Slot [30] registered
Sep 4 17:10:39.219276 kernel: acpiphp: Slot [31] registered
Sep 4 17:10:39.219295 kernel: PCI host bridge to bus 0000:00
Sep 4 17:10:39.219526 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 17:10:39.219721 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:10:39.219903 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:10:39.220084 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 17:10:39.221993 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 17:10:39.222254 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 17:10:39.222461 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 17:10:39.222681 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:10:39.222886 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 17:10:39.223086 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:10:39.225452 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:10:39.225659 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 17:10:39.225880 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 17:10:39.226077 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 17:10:39.226357 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:10:39.226657 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 17:10:39.226868 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 17:10:39.227070 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 17:10:39.228419 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 17:10:39.228641 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 17:10:39.228834 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 17:10:39.229025 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:10:39.229264 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:10:39.229294 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:10:39.229314 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:10:39.229334 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:10:39.229354 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:10:39.229372 kernel: iommu: Default domain type: Translated
Sep 4 17:10:39.229392 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:10:39.229418 kernel: efivars: Registered efivars operations
Sep 4 17:10:39.229437 kernel: vgaarb: loaded
Sep 4 17:10:39.229456 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:10:39.229475 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:10:39.229493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:10:39.229512 kernel: pnp: PnP ACPI init
Sep 4 17:10:39.229753 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 17:10:39.229784 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:10:39.229810 kernel: NET: Registered PF_INET protocol family
Sep 4 17:10:39.229830 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:10:39.229849 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:10:39.229869 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:10:39.229888 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:10:39.229907 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:10:39.229926 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:10:39.229945 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:10:39.229964 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:10:39.229987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:10:39.230006 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:10:39.230025 kernel: kvm [1]: HYP mode not available
Sep 4 17:10:39.230044 kernel: Initialise system trusted keyrings
Sep 4 17:10:39.230064 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:10:39.230083 kernel: Key type asymmetric registered
Sep 4 17:10:39.233191 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:10:39.233221 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:10:39.233242 kernel: io scheduler mq-deadline registered
Sep 4 17:10:39.233270 kernel: io scheduler kyber registered
Sep 4 17:10:39.233289 kernel: io scheduler bfq registered
Sep 4 17:10:39.233545 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 17:10:39.233574 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:10:39.233594 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:10:39.233613 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 17:10:39.233632 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 17:10:39.233651 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:10:39.233676 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 17:10:39.233902 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 17:10:39.233931 kernel: printk: console [ttyS0] disabled
Sep 4 17:10:39.233951 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 17:10:39.233970 kernel: printk: console [ttyS0] enabled
Sep 4 17:10:39.233989 kernel: printk: bootconsole [uart0] disabled
Sep 4 17:10:39.234008 kernel: thunder_xcv, ver 1.0
Sep 4 17:10:39.234027 kernel: thunder_bgx, ver 1.0
Sep 4 17:10:39.234045 kernel: nicpf, ver 1.0
Sep 4 17:10:39.234064 kernel: nicvf, ver 1.0
Sep 4 17:10:39.234306 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:10:39.234498 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:10:38 UTC (1725469838)
Sep 4 17:10:39.234524 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:10:39.234544 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 17:10:39.234562 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:10:39.234581 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:10:39.234600 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:10:39.234619 kernel: Segment Routing with IPv6
Sep 4 17:10:39.234644 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:10:39.234662 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:10:39.234681 kernel: Key type dns_resolver registered
Sep 4 17:10:39.234700 kernel: registered taskstats version 1
Sep 4 17:10:39.234718 kernel: Loading compiled-in X.509 certificates
Sep 4 17:10:39.234737 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep 4 17:10:39.234756 kernel: Key type .fscrypt registered
Sep 4 17:10:39.234774 kernel: Key type fscrypt-provisioning registered
Sep 4 17:10:39.234792 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:10:39.234815 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:10:39.234835 kernel: ima: No architecture policies found
Sep 4 17:10:39.234853 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:10:39.234872 kernel: clk: Disabling unused clocks
Sep 4 17:10:39.234891 kernel: Freeing unused kernel memory: 39040K
Sep 4 17:10:39.234909 kernel: Run /init as init process
Sep 4 17:10:39.234928 kernel: with arguments:
Sep 4 17:10:39.234946 kernel: /init
Sep 4 17:10:39.234964 kernel: with environment:
Sep 4 17:10:39.234986 kernel: HOME=/
Sep 4 17:10:39.235005 kernel: TERM=linux
Sep 4 17:10:39.235023 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:10:39.235046 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:10:39.235069 systemd[1]: Detected virtualization amazon.
Sep 4 17:10:39.235089 systemd[1]: Detected architecture arm64.
Sep 4 17:10:39.237890 systemd[1]: Running in initrd.
Sep 4 17:10:39.237913 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:10:39.237945 systemd[1]: Hostname set to .
Sep 4 17:10:39.237967 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:10:39.237987 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:10:39.238008 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:10:39.238029 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:10:39.238052 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:10:39.238073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:10:39.238144 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:10:39.238171 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:10:39.238195 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:10:39.238217 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:10:39.238238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:10:39.238258 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:10:39.238279 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:10:39.238306 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:10:39.238327 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:10:39.238347 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:10:39.238368 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:10:39.238389 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:10:39.238410 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:10:39.238430 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:10:39.238450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:10:39.238471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:10:39.238497 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:10:39.238517 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:10:39.238538 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:10:39.238558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:10:39.238578 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:10:39.238599 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:10:39.238619 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:10:39.238639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:10:39.238664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:10:39.238685 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:10:39.238705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:10:39.238726 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:10:39.238791 systemd-journald[251]: Collecting audit messages is disabled.
Sep 4 17:10:39.238841 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:10:39.238863 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:10:39.238882 kernel: Bridge firewalling registered
Sep 4 17:10:39.238903 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:10:39.238927 systemd-journald[251]: Journal started
Sep 4 17:10:39.238965 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e9bea513a2f3438b48baaf5dece4f) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:10:39.189976 systemd-modules-load[252]: Inserted module 'overlay'
Sep 4 17:10:39.229172 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 4 17:10:39.251125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:10:39.255119 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:10:39.254352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:10:39.255942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:10:39.264357 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:10:39.268456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:10:39.278420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:10:39.327651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:10:39.338481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:10:39.347278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:10:39.364383 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:10:39.365318 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:10:39.383554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:10:39.406712 dracut-cmdline[285]: dracut-dracut-053
Sep 4 17:10:39.416874 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:10:39.467620 systemd-resolved[287]: Positive Trust Anchors:
Sep 4 17:10:39.467655 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:10:39.467717 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:10:39.583135 kernel: SCSI subsystem initialized
Sep 4 17:10:39.590156 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:10:39.603133 kernel: iscsi: registered transport (tcp)
Sep 4 17:10:39.626135 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:10:39.626205 kernel: QLogic iSCSI HBA Driver
Sep 4 17:10:39.707127 kernel: random: crng init done
Sep 4 17:10:39.707430 systemd-resolved[287]: Defaulting to hostname 'linux'.
Sep 4 17:10:39.712897 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:10:39.721811 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:10:39.734866 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:10:39.745385 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:10:39.777409 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:10:39.777485 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:10:39.777512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:10:39.845139 kernel: raid6: neonx8 gen() 6678 MB/s
Sep 4 17:10:39.862127 kernel: raid6: neonx4 gen() 6496 MB/s
Sep 4 17:10:39.879127 kernel: raid6: neonx2 gen() 5431 MB/s
Sep 4 17:10:39.896133 kernel: raid6: neonx1 gen() 3940 MB/s
Sep 4 17:10:39.913127 kernel: raid6: int64x8 gen() 3808 MB/s
Sep 4 17:10:39.930127 kernel: raid6: int64x4 gen() 3720 MB/s
Sep 4 17:10:39.947126 kernel: raid6: int64x2 gen() 3597 MB/s
Sep 4 17:10:39.964908 kernel: raid6: int64x1 gen() 2770 MB/s
Sep 4 17:10:39.964940 kernel: raid6: using algorithm neonx8 gen() 6678 MB/s
Sep 4 17:10:39.982904 kernel: raid6: .... xor() 4933 MB/s, rmw enabled
Sep 4 17:10:39.982949 kernel: raid6: using neon recovery algorithm
Sep 4 17:10:39.991136 kernel: xor: measuring software checksum speed
Sep 4 17:10:39.991202 kernel: 8regs : 11030 MB/sec
Sep 4 17:10:39.994126 kernel: 32regs : 11923 MB/sec
Sep 4 17:10:39.996648 kernel: arm64_neon : 9602 MB/sec
Sep 4 17:10:39.996683 kernel: xor: using function: 32regs (11923 MB/sec)
Sep 4 17:10:40.082167 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:10:40.101403 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:10:40.114424 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:10:40.148870 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Sep 4 17:10:40.156633 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:10:40.178500 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:10:40.202921 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Sep 4 17:10:40.258697 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:10:40.274414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:10:40.389284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:10:40.410397 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:10:40.457656 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:10:40.472232 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:10:40.482488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:10:40.493396 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:10:40.510474 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:10:40.561599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:10:40.574088 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:10:40.574177 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 17:10:40.582920 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:10:40.583294 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:10:40.615162 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a8:3b:cb:2e:b1
Sep 4 17:10:40.617849 (udev-worker)[534]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:10:40.631125 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 17:10:40.633120 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:10:40.634729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:10:40.653512 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:10:40.634985 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:10:40.638618 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:10:40.663230 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:10:40.663269 kernel: GPT:9289727 != 16777215
Sep 4 17:10:40.663295 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:10:40.641230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:10:40.641560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:10:40.669991 kernel: GPT:9289727 != 16777215
Sep 4 17:10:40.670024 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:10:40.645419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:10:40.663722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:10:40.675684 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:40.706745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:10:40.720173 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:10:40.763466 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:10:40.825133 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (518)
Sep 4 17:10:40.833755 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:10:40.844125 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (528)
Sep 4 17:10:40.938600 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:10:40.972485 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:10:40.986489 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:10:40.986628 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:10:41.006474 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:10:41.023549 disk-uuid[659]: Primary Header is updated.
Sep 4 17:10:41.023549 disk-uuid[659]: Secondary Entries is updated.
Sep 4 17:10:41.023549 disk-uuid[659]: Secondary Header is updated.
Sep 4 17:10:41.036127 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:41.046135 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:41.055122 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:42.055141 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:10:42.057546 disk-uuid[660]: The operation has completed successfully.
Sep 4 17:10:42.222838 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:10:42.224176 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:10:42.283381 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:10:42.315208 sh[1004]: Success
Sep 4 17:10:42.345156 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:10:42.453278 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:10:42.460179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:10:42.473288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:10:42.502052 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:10:42.502129 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:42.502159 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:10:42.504955 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:10:42.504989 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:10:42.594140 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:10:42.650606 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:10:42.654467 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:10:42.666486 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:10:42.671725 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:10:42.705992 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:42.706075 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:42.707467 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:10:42.712133 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:10:42.731119 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:10:42.736074 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:42.760261 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:10:42.773193 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:10:42.856788 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:10:42.873065 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:10:42.937290 systemd-networkd[1196]: lo: Link UP
Sep 4 17:10:42.937311 systemd-networkd[1196]: lo: Gained carrier
Sep 4 17:10:42.939735 systemd-networkd[1196]: Enumeration completed
Sep 4 17:10:42.940456 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:10:42.940463 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:10:42.941734 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:10:42.945938 systemd[1]: Reached target network.target - Network.
Sep 4 17:10:42.962847 systemd-networkd[1196]: eth0: Link UP
Sep 4 17:10:42.962855 systemd-networkd[1196]: eth0: Gained carrier
Sep 4 17:10:42.962873 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:10:42.999176 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.30.239/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:10:43.202771 ignition[1129]: Ignition 2.18.0
Sep 4 17:10:43.202801 ignition[1129]: Stage: fetch-offline
Sep 4 17:10:43.204777 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:43.204802 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:43.211122 ignition[1129]: Ignition finished successfully
Sep 4 17:10:43.215272 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:10:43.234543 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:10:43.259406 ignition[1207]: Ignition 2.18.0
Sep 4 17:10:43.259899 ignition[1207]: Stage: fetch
Sep 4 17:10:43.260539 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:43.260564 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:43.260716 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:43.280549 ignition[1207]: PUT result: OK
Sep 4 17:10:43.283530 ignition[1207]: parsed url from cmdline: ""
Sep 4 17:10:43.283545 ignition[1207]: no config URL provided
Sep 4 17:10:43.283561 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:10:43.283585 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:10:43.283616 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:43.285623 ignition[1207]: PUT result: OK
Sep 4 17:10:43.286252 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:10:43.297380 ignition[1207]: GET result: OK
Sep 4 17:10:43.297537 ignition[1207]: parsing config with SHA512: f461e62b2b74076a13c56cc9b6bb7e084494c77bf37c0536d9d3f70f9b21d76ab778fc40b4d58406279393dae69aae3efb7ff6aecd9a80d855d663bb408f0d6d
Sep 4 17:10:43.304837 unknown[1207]: fetched base config from "system"
Sep 4 17:10:43.304854 unknown[1207]: fetched base config from "system"
Sep 4 17:10:43.314121 ignition[1207]: fetch: fetch complete
Sep 4 17:10:43.304867 unknown[1207]: fetched user config from "aws"
Sep 4 17:10:43.314140 ignition[1207]: fetch: fetch passed
Sep 4 17:10:43.320954 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:10:43.314236 ignition[1207]: Ignition finished successfully
Sep 4 17:10:43.341410 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:10:43.370812 ignition[1214]: Ignition 2.18.0
Sep 4 17:10:43.371349 ignition[1214]: Stage: kargs
Sep 4 17:10:43.371946 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:43.371971 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:43.372143 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:43.375056 ignition[1214]: PUT result: OK
Sep 4 17:10:43.386845 ignition[1214]: kargs: kargs passed
Sep 4 17:10:43.388479 ignition[1214]: Ignition finished successfully
Sep 4 17:10:43.393072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:10:43.408553 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:10:43.433295 ignition[1221]: Ignition 2.18.0
Sep 4 17:10:43.433317 ignition[1221]: Stage: disks
Sep 4 17:10:43.433935 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:43.433959 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:43.434134 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:43.436899 ignition[1221]: PUT result: OK
Sep 4 17:10:43.450077 ignition[1221]: disks: disks passed
Sep 4 17:10:43.450222 ignition[1221]: Ignition finished successfully
Sep 4 17:10:43.455027 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:10:43.455744 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:10:43.457017 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:10:43.457457 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:10:43.457768 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:10:43.458107 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:10:43.473719 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:10:43.528695 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:10:43.537394 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:10:43.550587 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:10:43.642118 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:10:43.643023 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:10:43.647544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:10:43.684262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:10:43.692308 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:10:43.700887 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:10:43.704436 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:10:43.722274 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249)
Sep 4 17:10:43.704491 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:10:43.715271 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:10:43.731646 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:43.731708 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:43.731746 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:10:43.741144 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:10:43.742382 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:10:43.753922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:10:44.172191 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:10:44.191667 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:10:44.212292 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:10:44.222908 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:10:44.525187 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:10:44.537300 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:10:44.543322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:10:44.568397 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:10:44.574173 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:44.599975 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:10:44.624567 ignition[1362]: INFO : Ignition 2.18.0
Sep 4 17:10:44.627916 ignition[1362]: INFO : Stage: mount
Sep 4 17:10:44.627916 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:44.627916 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:44.627916 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:44.640084 ignition[1362]: INFO : PUT result: OK
Sep 4 17:10:44.644046 ignition[1362]: INFO : mount: mount passed
Sep 4 17:10:44.644046 ignition[1362]: INFO : Ignition finished successfully
Sep 4 17:10:44.650826 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:10:44.671441 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:10:44.689436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:10:44.713566 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374)
Sep 4 17:10:44.713630 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:10:44.713673 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:10:44.716117 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:10:44.720126 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:10:44.723461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:10:44.769581 ignition[1391]: INFO : Ignition 2.18.0
Sep 4 17:10:44.769581 ignition[1391]: INFO : Stage: files
Sep 4 17:10:44.775053 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:44.775053 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:44.775053 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:44.775053 ignition[1391]: INFO : PUT result: OK
Sep 4 17:10:44.791942 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:10:44.796734 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:10:44.796734 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:10:44.808387 systemd-networkd[1196]: eth0: Gained IPv6LL
Sep 4 17:10:44.830665 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:10:44.834394 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:10:44.839393 unknown[1391]: wrote ssh authorized keys file for user: core
Sep 4 17:10:44.842570 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:10:44.854708 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:10:44.854708 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:10:44.966305 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:10:45.066190 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:10:45.066190 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:10:45.078466 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Sep 4 17:10:52.961014 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:10:53.330907 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:10:53.330907 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:10:53.346305 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:10:53.351366 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:10:53.351366 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:10:53.351366 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:10:53.351366 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:10:53.351366 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:10:53.351366 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:10:53.351366 ignition[1391]: INFO : files: files passed
Sep 4 17:10:53.351366 ignition[1391]: INFO : Ignition finished successfully
Sep 4 17:10:53.383153 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:10:53.405619 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:10:53.414385 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:10:53.435345 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:10:53.436166 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:10:53.455845 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:10:53.455845 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:10:53.466199 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:10:53.473342 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:10:53.478454 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:10:53.498567 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:10:53.570374 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:10:53.570812 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:10:53.580995 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:10:53.585490 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:10:53.587778 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:10:53.604317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:10:53.635570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:10:53.659543 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:10:53.681708 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:10:53.684872 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:10:53.693657 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:10:53.698539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:10:53.698789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:10:53.702209 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:10:53.711677 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:10:53.714038 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:10:53.716777 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:10:53.719686 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:10:53.722789 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:10:53.725412 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:10:53.728548 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:10:53.731261 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:10:53.733890 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:10:53.736034 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:10:53.736312 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:10:53.739476 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:10:53.770408 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:10:53.773392 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:10:53.778765 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:10:53.781954 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:10:53.782235 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:10:53.788667 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:10:53.789782 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:10:53.793787 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:10:53.794156 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:10:53.827366 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:10:53.834189 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:10:53.838289 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:10:53.839568 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:10:53.843583 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:10:53.843814 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:10:53.870722 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:10:53.874424 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:10:53.898141 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:10:53.900995 ignition[1445]: INFO : Ignition 2.18.0
Sep 4 17:10:53.900995 ignition[1445]: INFO : Stage: umount
Sep 4 17:10:53.906524 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:10:53.906524 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:10:53.906524 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:10:53.920006 ignition[1445]: INFO : PUT result: OK
Sep 4 17:10:53.913654 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:10:53.913909 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:10:53.928759 ignition[1445]: INFO : umount: umount passed
Sep 4 17:10:53.930578 ignition[1445]: INFO : Ignition finished successfully
Sep 4 17:10:53.936284 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:10:53.936733 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:10:53.945962 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:10:53.946072 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:10:53.953437 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:10:53.953553 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:10:53.956040 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:10:53.956165 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:10:53.958502 systemd[1]: Stopped target network.target - Network.
Sep 4 17:10:53.960500 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:10:53.960591 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:10:53.963365 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:10:53.965420 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:10:53.985363 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:10:53.988173 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:10:53.994285 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:10:53.996197 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:10:53.996279 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:10:53.998936 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:10:53.999020 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:10:54.005360 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:10:54.005453 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:10:54.007854 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:10:54.007929 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:10:54.011029 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:10:54.011158 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:10:54.036021 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:10:54.038395 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:10:54.052199 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Sep 4 17:10:54.054632 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:10:54.054830 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:10:54.063722 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:10:54.063992 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:10:54.073952 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:10:54.074338 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:10:54.084239 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:10:54.084355 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:10:54.103356 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:10:54.107587 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:10:54.107725 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:10:54.112861 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:10:54.112969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:10:54.115247 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:10:54.115353 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:10:54.117872 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:10:54.146600 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:10:54.149675 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:10:54.154799 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:10:54.155044 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:10:54.165143 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:10:54.165279 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:10:54.168348 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:10:54.168444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:10:54.171372 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:10:54.171499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:10:54.191286 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:10:54.191405 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:10:54.196644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:10:54.196774 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:10:54.225601 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:10:54.228741 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:10:54.228891 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:10:54.232685 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:10:54.232799 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:10:54.236363 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:10:54.236465 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:10:54.239857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:10:54.239963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:10:54.282876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:10:54.283328 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:10:54.289497 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:10:54.300414 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:10:54.324142 systemd[1]: Switching root.
Sep 4 17:10:54.364370 systemd-journald[251]: Journal stopped
Sep 4 17:10:57.453570 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:10:57.453714 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:10:57.453766 kernel: SELinux: policy capability open_perms=1
Sep 4 17:10:57.453797 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:10:57.453836 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:10:57.453870 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:10:57.453901 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:10:57.453931 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:10:57.453960 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:10:57.454001 kernel: audit: type=1403 audit(1725469855.398:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:10:57.454040 systemd[1]: Successfully loaded SELinux policy in 60.754ms.
Sep 4 17:10:57.454084 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.490ms.
Sep 4 17:10:57.454169 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:10:57.454212 systemd[1]: Detected virtualization amazon.
Sep 4 17:10:57.454243 systemd[1]: Detected architecture arm64.
Sep 4 17:10:57.454273 systemd[1]: Detected first boot.
Sep 4 17:10:57.454341 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:10:57.454382 zram_generator::config[1487]: No configuration found.
Sep 4 17:10:57.454418 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:10:57.454451 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:10:57.454482 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:10:57.454519 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:10:57.454554 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:10:57.454598 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:10:57.454631 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:10:57.454663 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:10:57.454695 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:10:57.454727 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:10:57.454757 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:10:57.454789 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:10:57.454823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:10:57.454853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:10:57.454883 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:10:57.454915 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:10:57.454946 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:10:57.454987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:10:57.455018 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:10:57.457171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:10:57.457255 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:10:57.457300 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:10:57.457335 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:10:57.457370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:10:57.457403 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:10:57.457436 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:10:57.457470 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:10:57.457507 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:10:57.457537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:10:57.457605 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:10:57.457645 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:10:57.457680 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:10:57.457715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:10:57.457751 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:10:57.457786 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:10:57.457822 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:10:57.457867 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:10:57.457899 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:10:57.457941 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:10:57.457981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:10:57.458022 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:10:57.458057 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:10:57.460180 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:10:57.460269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:10:57.460306 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:10:57.460337 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:10:57.460378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:10:57.460412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:10:57.460447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:10:57.460477 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:10:57.460508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:10:57.460542 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:10:57.460574 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:10:57.460609 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:10:57.460649 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:10:57.460681 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:10:57.460715 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:10:57.460748 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:10:57.460780 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:10:57.460811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:10:57.460844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:10:57.460879 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:10:57.460911 systemd[1]: Stopped verity-setup.service.
Sep 4 17:10:57.460943 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:10:57.460983 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:10:57.461014 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:10:57.461044 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:10:57.461076 kernel: fuse: init (API version 7.39)
Sep 4 17:10:57.461197 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:10:57.461235 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:10:57.461269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:10:57.461300 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:10:57.461331 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:10:57.461362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:10:57.461394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:10:57.461428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:10:57.461462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:10:57.461505 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:10:57.461539 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:10:57.461601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:10:57.461644 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:10:57.461686 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:10:57.461727 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:10:57.461827 systemd-journald[1575]: Collecting audit messages is disabled.
Sep 4 17:10:57.461884 kernel: loop: module loaded
Sep 4 17:10:57.461916 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:10:57.461945 systemd-journald[1575]: Journal started
Sep 4 17:10:57.461995 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2e9bea513a2f3438b48baaf5dece4f) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:10:57.470392 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:10:56.711308 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:10:56.798183 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:10:56.799167 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:10:57.488157 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:10:57.488251 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:10:57.494171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:10:57.514160 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:10:57.530246 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:10:57.537161 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:10:57.550198 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:10:57.558776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:10:57.564007 kernel: ACPI: bus type drm_connector registered
Sep 4 17:10:57.564152 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:10:57.575152 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:10:57.590151 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:10:57.601198 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:10:57.609041 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:10:57.615523 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:10:57.621055 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:10:57.621539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:10:57.628246 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:10:57.633829 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:10:57.639817 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:10:57.646312 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:10:57.707930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:10:57.728484 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:10:57.746231 kernel: loop0: detected capacity change from 0 to 51896
Sep 4 17:10:57.747740 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:10:57.753170 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:10:57.763989 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:10:57.768945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:10:57.771344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:10:57.804156 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Sep 4 17:10:57.805477 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Sep 4 17:10:57.830261 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:10:57.835306 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2e9bea513a2f3438b48baaf5dece4f is 143.084ms for 916 entries.
Sep 4 17:10:57.835306 systemd-journald[1575]: System Journal (/var/log/journal/ec2e9bea513a2f3438b48baaf5dece4f) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:10:58.026672 systemd-journald[1575]: Received client request to flush runtime journal.
Sep 4 17:10:58.026788 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:10:58.026845 kernel: loop1: detected capacity change from 0 to 59688
Sep 4 17:10:57.860142 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:10:57.945206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:10:57.970913 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:10:57.997494 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:10:58.042259 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:10:58.046263 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:10:58.055732 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:10:58.072650 udevadm[1634]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:10:58.101635 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:10:58.113740 kernel: loop2: detected capacity change from 0 to 113672
Sep 4 17:10:58.117509 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:10:58.194440 systemd-tmpfiles[1641]: ACLs are not supported, ignoring.
Sep 4 17:10:58.194476 systemd-tmpfiles[1641]: ACLs are not supported, ignoring.
Sep 4 17:10:58.213467 kernel: loop3: detected capacity change from 0 to 193208
Sep 4 17:10:58.222697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:10:58.286254 kernel: loop4: detected capacity change from 0 to 51896
Sep 4 17:10:58.310445 kernel: loop5: detected capacity change from 0 to 59688
Sep 4 17:10:58.332594 kernel: loop6: detected capacity change from 0 to 113672
Sep 4 17:10:58.346394 kernel: loop7: detected capacity change from 0 to 193208
Sep 4 17:10:58.374201 (sd-merge)[1646]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:10:58.375244 (sd-merge)[1646]: Merged extensions into '/usr'.
Sep 4 17:10:58.388124 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:10:58.388188 systemd[1]: Reloading...
Sep 4 17:10:58.621584 zram_generator::config[1670]: No configuration found.
Sep 4 17:10:59.040662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:10:59.173774 systemd[1]: Reloading finished in 784 ms.
Sep 4 17:10:59.223748 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:10:59.232216 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:10:59.254706 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:10:59.275332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:10:59.284481 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:10:59.301467 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:10:59.301514 systemd[1]: Reloading...
Sep 4 17:10:59.370087 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:10:59.370843 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:10:59.376648 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:10:59.379409 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Sep 4 17:10:59.379601 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Sep 4 17:10:59.396756 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:10:59.396790 systemd-tmpfiles[1723]: Skipping /boot
Sep 4 17:10:59.402746 ldconfig[1593]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:10:59.404595 systemd-udevd[1724]: Using default interface naming scheme 'v255'.
Sep 4 17:10:59.448623 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:10:59.450169 systemd-tmpfiles[1723]: Skipping /boot
Sep 4 17:10:59.486987 zram_generator::config[1749]: No configuration found.
Sep 4 17:10:59.703645 (udev-worker)[1796]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:10:59.721318 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1755)
Sep 4 17:10:59.932430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:11:00.088270 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:11:00.090321 systemd[1]: Reloading finished in 787 ms.
Sep 4 17:11:00.093222 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1812)
Sep 4 17:11:00.127726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:11:00.137198 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:11:00.145260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:11:00.210801 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:11:00.221291 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:11:00.235226 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:11:00.255899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:11:00.267814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:11:00.280018 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:11:00.288805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:00.304928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:11:00.310766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:11:00.320743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:11:00.331680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:11:00.341721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:11:00.359319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:11:00.368658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:11:00.373039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:11:00.373484 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:11:00.389510 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:11:00.395546 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:11:00.410653 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:11:00.430059 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:11:00.446446 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:11:00.451711 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:11:00.485005 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:11:00.510277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:11:00.547724 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:11:00.570795 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:11:00.613934 augenrules[1952]: No rules
Sep 4 17:11:00.634028 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:11:00.648240 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:11:00.657941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:11:00.658290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:11:00.663555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:11:00.665464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:11:00.671937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:11:00.672312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:11:00.678007 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:11:00.678403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:11:00.683906 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:11:00.710649 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:11:00.715663 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:11:00.715836 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:11:00.716715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:00.735024 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:11:00.764470 lvm[1967]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:11:00.806085 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:11:00.812397 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:11:00.823650 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:11:00.856190 lvm[1976]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:11:00.881979 systemd-networkd[1895]: lo: Link UP
Sep 4 17:11:00.881997 systemd-networkd[1895]: lo: Gained carrier
Sep 4 17:11:00.885674 systemd-networkd[1895]: Enumeration completed
Sep 4 17:11:00.886132 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:11:00.886996 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:00.887017 systemd-networkd[1895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:11:00.890815 systemd-networkd[1895]: eth0: Link UP
Sep 4 17:11:00.891563 systemd-networkd[1895]: eth0: Gained carrier
Sep 4 17:11:00.891757 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:00.903310 systemd-networkd[1895]: eth0: DHCPv4 address 172.31.30.239/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:11:00.905495 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:11:00.932798 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:11:00.963869 systemd-resolved[1898]: Positive Trust Anchors:
Sep 4 17:11:00.963869 systemd-resolved[1898]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:11:00.963983 systemd-resolved[1898]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:11:00.971844 systemd-resolved[1898]: Defaulting to hostname 'linux'.
Sep 4 17:11:00.975010 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:11:00.977817 systemd[1]: Reached target network.target - Network.
Sep 4 17:11:00.979956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:11:00.982984 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:11:00.985661 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:11:00.988519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:11:00.991724 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:11:00.994555 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:11:00.997667 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:11:01.000513 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:11:01.000587 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:11:01.002724 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:11:01.006714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:11:01.012057 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:11:01.025256 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:11:01.029029 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:11:01.032081 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:11:01.034767 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:11:01.037540 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:01.037596 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:01.047424 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:11:01.054997 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:11:01.067021 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:11:01.084110 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:11:01.093459 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:11:01.097939 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:11:01.109375 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:11:01.120252 jq[1985]: false Sep 4 17:11:01.128677 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:11:01.142179 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:11:01.150397 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:11:01.157484 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 4 17:11:01.167709 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:11:01.180483 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:11:01.184297 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:11:01.186264 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:11:01.188390 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:11:01.196377 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:11:01.202842 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:11:01.204232 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:11:01.213284 extend-filesystems[1986]: Found loop4 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found loop5 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found loop6 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found loop7 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1p1 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1p2 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1p3 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found usr Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1p4 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1p6 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1p7 Sep 4 17:11:01.213284 extend-filesystems[1986]: Found nvme0n1p9 Sep 4 17:11:01.277760 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9 Sep 4 17:11:01.260981 dbus-daemon[1984]: [system] SELinux support is enabled Sep 4 17:11:01.271367 dbus-daemon[1984]: [system] Activating systemd to hand-off: service 
name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1895 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:11:01.283038 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:11:01.291714 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:11:01.294214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:11:01.294276 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:11:01.297341 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:11:01.297380 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:11:01.304607 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:11:01.330416 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 17:11:01.386602 jq[1998]: true Sep 4 17:11:01.386610 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:11:01.388280 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:11:01.390531 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9 Sep 4 17:11:01.411827 tar[2003]: linux-arm64/helm Sep 4 17:11:01.415543 extend-filesystems[2026]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:11:01.423399 update_engine[1997]: I0904 17:11:01.423254 1997 main.cc:92] Flatcar Update Engine starting Sep 4 17:11:01.449475 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:11:01.433878 systemd[1]: Started update-engine.service - Update Engine. 
Sep 4 17:11:01.449680 update_engine[1997]: I0904 17:11:01.433924 1997 update_check_scheduler.cc:74] Next update check in 8m22s Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: ---------------------------------------------------- Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: corporation. Support and training for ntp-4 are Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: available at https://www.nwtime.org/support Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: ---------------------------------------------------- Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: proto: precision = 0.108 usec (-23) Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: basedate set to 2024-08-23 Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: gps base set to 2024-08-25 (week 2329) Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Listen normally on 3 eth0 172.31.30.239:123 Sep 4 17:11:01.449760 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Listen normally on 4 lo [::1]:123 Sep 4 17:11:01.437285 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:11:01.437339 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 
17:11:01.453785 (ntainerd)[2019]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:11:01.459170 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: bind(21) AF_INET6 fe80::4a8:3bff:fecb:2eb1%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:01.459170 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: unable to create socket on eth0 (5) for fe80::4a8:3bff:fecb:2eb1%2#123 Sep 4 17:11:01.459170 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: failed to init interface for address fe80::4a8:3bff:fecb:2eb1%2 Sep 4 17:11:01.459170 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: Listening on routing socket on fd #21 for interface updates Sep 4 17:11:01.437360 ntpd[1988]: ---------------------------------------------------- Sep 4 17:11:01.454879 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:11:01.437379 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:11:01.437397 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:11:01.437416 ntpd[1988]: corporation. 
Support and training for ntp-4 are Sep 4 17:11:01.437434 ntpd[1988]: available at https://www.nwtime.org/support Sep 4 17:11:01.437453 ntpd[1988]: ---------------------------------------------------- Sep 4 17:11:01.440445 ntpd[1988]: proto: precision = 0.108 usec (-23) Sep 4 17:11:01.442837 ntpd[1988]: basedate set to 2024-08-23 Sep 4 17:11:01.442873 ntpd[1988]: gps base set to 2024-08-25 (week 2329) Sep 4 17:11:01.446458 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:11:01.446562 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:11:01.446906 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:11:01.446992 ntpd[1988]: Listen normally on 3 eth0 172.31.30.239:123 Sep 4 17:11:01.447076 ntpd[1988]: Listen normally on 4 lo [::1]:123 Sep 4 17:11:01.451233 ntpd[1988]: bind(21) AF_INET6 fe80::4a8:3bff:fecb:2eb1%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:01.451284 ntpd[1988]: unable to create socket on eth0 (5) for fe80::4a8:3bff:fecb:2eb1%2#123 Sep 4 17:11:01.451312 ntpd[1988]: failed to init interface for address fe80::4a8:3bff:fecb:2eb1%2 Sep 4 17:11:01.451374 ntpd[1988]: Listening on routing socket on fd #21 for interface updates Sep 4 17:11:01.467170 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:01.468266 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:01.468266 ntpd[1988]: 4 Sep 17:11:01 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:01.467229 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:01.495540 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:11:01.497190 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 4 17:11:01.537885 jq[2027]: true Sep 4 17:11:01.602138 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:11:01.659415 systemd-logind[1996]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:11:01.670461 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:11:01.670461 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:11:01.670461 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:11:01.659481 systemd-logind[1996]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:11:01.710745 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:11:01.662054 systemd-logind[1996]: New seat seat0. Sep 4 17:11:01.672343 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:11:01.674253 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:11:01.689460 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:11:01.695856 systemd[1]: Finished setup-oem.service - Setup OEM. 
Sep 4 17:11:01.722734 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1812) Sep 4 17:11:01.805733 coreos-metadata[1983]: Sep 04 17:11:01.802 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:11:01.811160 coreos-metadata[1983]: Sep 04 17:11:01.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:11:01.811160 coreos-metadata[1983]: Sep 04 17:11:01.809 INFO Fetch successful Sep 4 17:11:01.811160 coreos-metadata[1983]: Sep 04 17:11:01.809 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:11:01.811525 coreos-metadata[1983]: Sep 04 17:11:01.811 INFO Fetch successful Sep 4 17:11:01.811591 coreos-metadata[1983]: Sep 04 17:11:01.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:11:01.812366 coreos-metadata[1983]: Sep 04 17:11:01.812 INFO Fetch successful Sep 4 17:11:01.812492 coreos-metadata[1983]: Sep 04 17:11:01.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:11:01.816959 coreos-metadata[1983]: Sep 04 17:11:01.815 INFO Fetch successful Sep 4 17:11:01.816959 coreos-metadata[1983]: Sep 04 17:11:01.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:11:01.816959 coreos-metadata[1983]: Sep 04 17:11:01.816 INFO Fetch failed with 404: resource not found Sep 4 17:11:01.816959 coreos-metadata[1983]: Sep 04 17:11:01.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:11:01.822144 coreos-metadata[1983]: Sep 04 17:11:01.820 INFO Fetch successful Sep 4 17:11:01.822144 coreos-metadata[1983]: Sep 04 17:11:01.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:11:01.826532 coreos-metadata[1983]: Sep 04 17:11:01.826 INFO Fetch successful Sep 4 17:11:01.826532 coreos-metadata[1983]: 
Sep 04 17:11:01.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:11:01.827505 coreos-metadata[1983]: Sep 04 17:11:01.827 INFO Fetch successful Sep 4 17:11:01.827505 coreos-metadata[1983]: Sep 04 17:11:01.827 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:11:01.830331 coreos-metadata[1983]: Sep 04 17:11:01.830 INFO Fetch successful Sep 4 17:11:01.830331 coreos-metadata[1983]: Sep 04 17:11:01.830 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:11:01.834905 coreos-metadata[1983]: Sep 04 17:11:01.831 INFO Fetch successful Sep 4 17:11:01.850079 bash[2079]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:11:01.858317 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:11:01.871165 systemd[1]: Starting sshkeys.service... Sep 4 17:11:01.912398 locksmithd[2030]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:11:01.957216 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:11:01.962761 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:11:01.991500 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:11:02.004817 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:11:02.014603 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2010 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:11:02.038084 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:11:02.056010 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Sep 4 17:11:02.075938 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:11:02.212432 polkitd[2131]: Started polkitd version 121 Sep 4 17:11:02.244595 polkitd[2131]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:11:02.244736 polkitd[2131]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:11:02.255168 polkitd[2131]: Finished loading, compiling and executing 2 rules Sep 4 17:11:02.265992 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:11:02.266617 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:11:02.280248 polkitd[2131]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:11:02.328321 coreos-metadata[2119]: Sep 04 17:11:02.328 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:11:02.331416 coreos-metadata[2119]: Sep 04 17:11:02.331 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:11:02.332331 coreos-metadata[2119]: Sep 04 17:11:02.332 INFO Fetch successful Sep 4 17:11:02.332331 coreos-metadata[2119]: Sep 04 17:11:02.332 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:11:02.340942 coreos-metadata[2119]: Sep 04 17:11:02.335 INFO Fetch successful Sep 4 17:11:02.344647 unknown[2119]: wrote ssh authorized keys file for user: core Sep 4 17:11:02.409961 systemd-hostnamed[2010]: Hostname set to (transient) Sep 4 17:11:02.412713 systemd-resolved[1898]: System hostname changed to 'ip-172-31-30-239'. 
Sep 4 17:11:02.438047 ntpd[1988]: bind(24) AF_INET6 fe80::4a8:3bff:fecb:2eb1%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:02.440814 ntpd[1988]: 4 Sep 17:11:02 ntpd[1988]: bind(24) AF_INET6 fe80::4a8:3bff:fecb:2eb1%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:02.440814 ntpd[1988]: 4 Sep 17:11:02 ntpd[1988]: unable to create socket on eth0 (6) for fe80::4a8:3bff:fecb:2eb1%2#123 Sep 4 17:11:02.440814 ntpd[1988]: 4 Sep 17:11:02 ntpd[1988]: failed to init interface for address fe80::4a8:3bff:fecb:2eb1%2 Sep 4 17:11:02.438139 ntpd[1988]: unable to create socket on eth0 (6) for fe80::4a8:3bff:fecb:2eb1%2#123 Sep 4 17:11:02.438174 ntpd[1988]: failed to init interface for address fe80::4a8:3bff:fecb:2eb1%2 Sep 4 17:11:02.479567 update-ssh-keys[2176]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:11:02.492278 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:11:02.505267 systemd[1]: Finished sshkeys.service. Sep 4 17:11:02.577142 containerd[2019]: time="2024-09-04T17:11:02.574689559Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:11:02.678825 containerd[2019]: time="2024-09-04T17:11:02.678462764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:11:02.678825 containerd[2019]: time="2024-09-04T17:11:02.678543656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:02.684822 containerd[2019]: time="2024-09-04T17:11:02.684750272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:02.685550 containerd[2019]: time="2024-09-04T17:11:02.685499240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:02.686593 containerd[2019]: time="2024-09-04T17:11:02.686540768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:02.689154 containerd[2019]: time="2024-09-04T17:11:02.688136396Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:11:02.689154 containerd[2019]: time="2024-09-04T17:11:02.688370888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:02.689154 containerd[2019]: time="2024-09-04T17:11:02.688485308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:02.689154 containerd[2019]: time="2024-09-04T17:11:02.688522652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:02.689154 containerd[2019]: time="2024-09-04T17:11:02.688671872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:02.689154 containerd[2019]: time="2024-09-04T17:11:02.689063624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:11:02.692912 containerd[2019]: time="2024-09-04T17:11:02.691419728Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:11:02.692912 containerd[2019]: time="2024-09-04T17:11:02.691468076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:02.692912 containerd[2019]: time="2024-09-04T17:11:02.691723640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:02.692912 containerd[2019]: time="2024-09-04T17:11:02.692144816Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:11:02.692912 containerd[2019]: time="2024-09-04T17:11:02.692808932Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:11:02.692912 containerd[2019]: time="2024-09-04T17:11:02.692839736Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:11:02.705763 containerd[2019]: time="2024-09-04T17:11:02.705682064Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:11:02.705763 containerd[2019]: time="2024-09-04T17:11:02.705765800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:11:02.705958 containerd[2019]: time="2024-09-04T17:11:02.705804236Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:11:02.705958 containerd[2019]: time="2024-09-04T17:11:02.705879308Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 4 17:11:02.705958 containerd[2019]: time="2024-09-04T17:11:02.705919664Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:11:02.705958 containerd[2019]: time="2024-09-04T17:11:02.705948572Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:11:02.706199 containerd[2019]: time="2024-09-04T17:11:02.706018508Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:11:02.707472 containerd[2019]: time="2024-09-04T17:11:02.707387048Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:11:02.707586 containerd[2019]: time="2024-09-04T17:11:02.707474948Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:11:02.707586 containerd[2019]: time="2024-09-04T17:11:02.707511536Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:11:02.707586 containerd[2019]: time="2024-09-04T17:11:02.707545616Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:11:02.707767 containerd[2019]: time="2024-09-04T17:11:02.707593544Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:11:02.707767 containerd[2019]: time="2024-09-04T17:11:02.707638556Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:11:02.707767 containerd[2019]: time="2024-09-04T17:11:02.707673440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:11:02.707767 containerd[2019]: time="2024-09-04T17:11:02.707707568Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 4 17:11:02.707767 containerd[2019]: time="2024-09-04T17:11:02.707740724Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:11:02.707984 containerd[2019]: time="2024-09-04T17:11:02.707772284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:11:02.707984 containerd[2019]: time="2024-09-04T17:11:02.707804612Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:11:02.707984 containerd[2019]: time="2024-09-04T17:11:02.707835152Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:11:02.710209 containerd[2019]: time="2024-09-04T17:11:02.708123344Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:11:02.710726 containerd[2019]: time="2024-09-04T17:11:02.710644652Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:11:02.710825 containerd[2019]: time="2024-09-04T17:11:02.710742812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:11:02.710825 containerd[2019]: time="2024-09-04T17:11:02.710780588Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:11:02.710951 containerd[2019]: time="2024-09-04T17:11:02.710840588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:11:02.711122 containerd[2019]: time="2024-09-04T17:11:02.710973248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711171140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711254336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711289856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711321848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711355268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711386108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711415292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711450104Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711767264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711808172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711837428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711870968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711901976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711934232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713252 containerd[2019]: time="2024-09-04T17:11:02.711966188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.713992 containerd[2019]: time="2024-09-04T17:11:02.711993356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:11:02.714053 containerd[2019]: time="2024-09-04T17:11:02.712506932Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:11:02.714053 containerd[2019]: time="2024-09-04T17:11:02.712621376Z" level=info msg="Connect containerd service"
Sep 4 17:11:02.714053 containerd[2019]: time="2024-09-04T17:11:02.712682048Z" level=info msg="using legacy CRI server"
Sep 4 17:11:02.714053 containerd[2019]: time="2024-09-04T17:11:02.712700732Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:11:02.714053 containerd[2019]: time="2024-09-04T17:11:02.712876532Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:11:02.719718 containerd[2019]: time="2024-09-04T17:11:02.719167388Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:11:02.719718 containerd[2019]: time="2024-09-04T17:11:02.719277368Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:11:02.719718 containerd[2019]: time="2024-09-04T17:11:02.719318108Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:11:02.719718 containerd[2019]: time="2024-09-04T17:11:02.719345372Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:11:02.719718 containerd[2019]: time="2024-09-04T17:11:02.719376344Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:11:02.720273 containerd[2019]: time="2024-09-04T17:11:02.720160916Z" level=info msg="Start subscribing containerd event"
Sep 4 17:11:02.720423 containerd[2019]: time="2024-09-04T17:11:02.720290516Z" level=info msg="Start recovering state"
Sep 4 17:11:02.720480 containerd[2019]: time="2024-09-04T17:11:02.720434420Z" level=info msg="Start event monitor"
Sep 4 17:11:02.720480 containerd[2019]: time="2024-09-04T17:11:02.720465620Z" level=info msg="Start snapshots syncer"
Sep 4 17:11:02.720573 containerd[2019]: time="2024-09-04T17:11:02.720488384Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:11:02.720573 containerd[2019]: time="2024-09-04T17:11:02.720507524Z" level=info msg="Start streaming server"
Sep 4 17:11:02.722170 containerd[2019]: time="2024-09-04T17:11:02.720912344Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:11:02.722170 containerd[2019]: time="2024-09-04T17:11:02.721043312Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:11:02.722878 containerd[2019]: time="2024-09-04T17:11:02.722494364Z" level=info msg="containerd successfully booted in 0.156566s"
Sep 4 17:11:02.722716 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:11:02.920276 systemd-networkd[1895]: eth0: Gained IPv6LL
Sep 4 17:11:02.928298 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:11:02.934438 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:11:02.947737 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 17:11:02.962462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:11:02.977777 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:11:03.098980 amazon-ssm-agent[2189]: Initializing new seelog logger
Sep 4 17:11:03.098980 amazon-ssm-agent[2189]: New Seelog Logger Creation Complete
Sep 4 17:11:03.099647 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.099647 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.100631 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 processing appconfig overrides
Sep 4 17:11:03.100429 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:11:03.106740 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.106740 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.106887 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 processing appconfig overrides
Sep 4 17:11:03.108336 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.108336 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.108336 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 processing appconfig overrides
Sep 4 17:11:03.110158 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO Proxy environment variables:
Sep 4 17:11:03.117618 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.117618 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:03.117618 amazon-ssm-agent[2189]: 2024/09/04 17:11:03 processing appconfig overrides
Sep 4 17:11:03.213287 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO https_proxy:
Sep 4 17:11:03.272269 tar[2003]: linux-arm64/LICENSE
Sep 4 17:11:03.272802 tar[2003]: linux-arm64/README.md
Sep 4 17:11:03.304177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:11:03.312986 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO http_proxy:
Sep 4 17:11:03.415201 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO no_proxy:
Sep 4 17:11:03.473920 sshd_keygen[2032]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:11:03.513924 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 17:11:03.566464 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:11:03.586254 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:11:03.600651 systemd[1]: Started sshd@0-172.31.30.239:22-139.178.89.65:49384.service - OpenSSH per-connection server daemon (139.178.89.65:49384).
Sep 4 17:11:03.612849 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO Checking if agent identity type EC2 can be assumed
Sep 4 17:11:03.631836 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:11:03.632870 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:11:03.652338 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:11:03.701701 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:11:03.714141 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO Agent will take identity from EC2
Sep 4 17:11:03.722475 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:11:03.745998 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:11:03.751550 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:11:03.813189 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:11:03.874287 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 49384 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:03.877896 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:03.906011 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:11:03.916507 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:11:03.919849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:11:03.933623 systemd-logind[1996]: New session 1 of user core.
Sep 4 17:11:03.967470 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:11:03.983816 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:11:04.007743 (systemd)[2230]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:04.013640 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:11:04.114222 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 17:11:04.215080 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 4 17:11:04.265782 systemd[2230]: Queued start job for default target default.target.
Sep 4 17:11:04.278049 systemd[2230]: Created slice app.slice - User Application Slice.
Sep 4 17:11:04.278610 systemd[2230]: Reached target paths.target - Paths.
Sep 4 17:11:04.278658 systemd[2230]: Reached target timers.target - Timers.
Sep 4 17:11:04.281843 systemd[2230]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:11:04.315665 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 17:11:04.319279 systemd[2230]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:11:04.319567 systemd[2230]: Reached target sockets.target - Sockets.
Sep 4 17:11:04.319622 systemd[2230]: Reached target basic.target - Basic System.
Sep 4 17:11:04.319714 systemd[2230]: Reached target default.target - Main User Target.
Sep 4 17:11:04.319780 systemd[2230]: Startup finished in 297ms.
Sep 4 17:11:04.319984 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:11:04.334438 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:11:04.417340 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 17:11:04.516196 systemd[1]: Started sshd@1-172.31.30.239:22-139.178.89.65:49400.service - OpenSSH per-connection server daemon (139.178.89.65:49400).
Sep 4 17:11:04.519549 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [Registrar] Starting registrar module
Sep 4 17:11:04.619563 amazon-ssm-agent[2189]: 2024-09-04 17:11:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 17:11:04.627634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:11:04.632612 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:11:04.635528 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:11:04.635750 systemd[1]: Startup finished in 1.139s (kernel) + 16.590s (initrd) + 9.296s (userspace) = 27.026s.
Sep 4 17:11:04.727026 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 49400 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:04.731011 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:04.744431 systemd-logind[1996]: New session 2 of user core.
Sep 4 17:11:04.756427 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:11:04.891389 sshd[2242]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:04.903888 systemd[1]: sshd@1-172.31.30.239:22-139.178.89.65:49400.service: Deactivated successfully.
Sep 4 17:11:04.908651 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:11:04.911282 systemd-logind[1996]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:11:04.933699 systemd[1]: Started sshd@2-172.31.30.239:22-139.178.89.65:49402.service - OpenSSH per-connection server daemon (139.178.89.65:49402).
Sep 4 17:11:04.936775 systemd-logind[1996]: Removed session 2.
Sep 4 17:11:05.113378 sshd[2263]: Accepted publickey for core from 139.178.89.65 port 49402 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:05.116643 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:05.128689 systemd-logind[1996]: New session 3 of user core.
Sep 4 17:11:05.136479 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:11:05.259574 sshd[2263]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:05.270262 systemd[1]: sshd@2-172.31.30.239:22-139.178.89.65:49402.service: Deactivated successfully.
Sep 4 17:11:05.275087 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:11:05.278017 systemd-logind[1996]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:11:05.294329 systemd-logind[1996]: Removed session 3.
Sep 4 17:11:05.300791 systemd[1]: Started sshd@3-172.31.30.239:22-139.178.89.65:49410.service - OpenSSH per-connection server daemon (139.178.89.65:49410).
Sep 4 17:11:05.438065 ntpd[1988]: Listen normally on 7 eth0 [fe80::4a8:3bff:fecb:2eb1%2]:123
Sep 4 17:11:05.439734 ntpd[1988]: 4 Sep 17:11:05 ntpd[1988]: Listen normally on 7 eth0 [fe80::4a8:3bff:fecb:2eb1%2]:123
Sep 4 17:11:05.487843 sshd[2270]: Accepted publickey for core from 139.178.89.65 port 49410 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:05.491416 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:05.504673 systemd-logind[1996]: New session 4 of user core.
Sep 4 17:11:05.510411 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:11:05.599404 kubelet[2249]: E0904 17:11:05.599178 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:11:05.605394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:11:05.605818 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:11:05.606660 systemd[1]: kubelet.service: Consumed 1.390s CPU time.
Sep 4 17:11:05.649196 sshd[2270]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:05.654939 systemd[1]: sshd@3-172.31.30.239:22-139.178.89.65:49410.service: Deactivated successfully.
Sep 4 17:11:05.658547 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:11:05.662442 systemd-logind[1996]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:11:05.664789 systemd-logind[1996]: Removed session 4.
Sep 4 17:11:05.704143 systemd[1]: Started sshd@4-172.31.30.239:22-139.178.89.65:49424.service - OpenSSH per-connection server daemon (139.178.89.65:49424).
Sep 4 17:11:05.867376 sshd[2280]: Accepted publickey for core from 139.178.89.65 port 49424 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:05.870468 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:05.878710 systemd-logind[1996]: New session 5 of user core.
Sep 4 17:11:05.891581 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:11:06.035640 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:11:06.036264 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:06.057599 sudo[2283]: pam_unix(sudo:session): session closed for user root
Sep 4 17:11:06.081195 sshd[2280]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:06.088991 systemd[1]: sshd@4-172.31.30.239:22-139.178.89.65:49424.service: Deactivated successfully.
Sep 4 17:11:06.093285 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:11:06.094596 systemd-logind[1996]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:11:06.096683 systemd-logind[1996]: Removed session 5.
Sep 4 17:11:06.119664 systemd[1]: Started sshd@5-172.31.30.239:22-139.178.89.65:49430.service - OpenSSH per-connection server daemon (139.178.89.65:49430).
Sep 4 17:11:06.296832 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 49430 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:06.299774 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:06.310497 systemd-logind[1996]: New session 6 of user core.
Sep 4 17:11:06.317449 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:11:06.405409 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [EC2Identity] EC2 registration was successful.
Sep 4 17:11:06.427337 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:11:06.428290 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:06.434539 sudo[2292]: pam_unix(sudo:session): session closed for user root
Sep 4 17:11:06.440496 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 17:11:06.440496 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 17:11:06.440496 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 17:11:06.445298 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:11:06.445905 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:06.467673 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:11:06.489140 auditctl[2295]: No rules
Sep 4 17:11:06.493052 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:11:06.493557 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:11:06.503753 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:11:06.506054 amazon-ssm-agent[2189]: 2024-09-04 17:11:06 INFO [CredentialRefresher] Next credential rotation will be in 31.616656840466668 minutes
Sep 4 17:11:06.563142 augenrules[2313]: No rules
Sep 4 17:11:06.564853 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:11:06.567327 sudo[2291]: pam_unix(sudo:session): session closed for user root
Sep 4 17:11:06.590996 sshd[2288]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:06.596447 systemd[1]: sshd@5-172.31.30.239:22-139.178.89.65:49430.service: Deactivated successfully.
Sep 4 17:11:06.600631 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:11:06.605523 systemd-logind[1996]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:11:06.608287 systemd-logind[1996]: Removed session 6.
Sep 4 17:11:06.637675 systemd[1]: Started sshd@6-172.31.30.239:22-139.178.89.65:49442.service - OpenSSH per-connection server daemon (139.178.89.65:49442).
Sep 4 17:11:06.806918 sshd[2321]: Accepted publickey for core from 139.178.89.65 port 49442 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:06.811258 sshd[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:06.821658 systemd-logind[1996]: New session 7 of user core.
Sep 4 17:11:06.825547 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:11:06.932999 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:11:06.933623 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:07.161612 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:11:07.165310 (dockerd)[2333]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:11:07.481365 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 17:11:07.582129 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2340) started
Sep 4 17:11:07.617317 dockerd[2333]: time="2024-09-04T17:11:07.617171760Z" level=info msg="Starting up"
Sep 4 17:11:07.681560 amazon-ssm-agent[2189]: 2024-09-04 17:11:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 17:11:08.381820 dockerd[2333]: time="2024-09-04T17:11:08.381244380Z" level=info msg="Loading containers: start."
Sep 4 17:11:08.004716 systemd-resolved[1898]: Clock change detected. Flushing caches.
Sep 4 17:11:08.014855 systemd-journald[1575]: Time jumped backwards, rotating.
Sep 4 17:11:08.144615 kernel: Initializing XFRM netlink socket
Sep 4 17:11:08.224778 (udev-worker)[2357]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:11:08.333708 systemd-networkd[1895]: docker0: Link UP
Sep 4 17:11:08.359492 dockerd[2333]: time="2024-09-04T17:11:08.359372841Z" level=info msg="Loading containers: done."
Sep 4 17:11:08.493327 dockerd[2333]: time="2024-09-04T17:11:08.493227346Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:11:08.495055 dockerd[2333]: time="2024-09-04T17:11:08.494050726Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:11:08.495055 dockerd[2333]: time="2024-09-04T17:11:08.494407738Z" level=info msg="Daemon has completed initialization"
Sep 4 17:11:08.545658 dockerd[2333]: time="2024-09-04T17:11:08.544778230Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:11:08.545059 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:11:09.715141 containerd[2019]: time="2024-09-04T17:11:09.714924144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep 4 17:11:10.381135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107025241.mount: Deactivated successfully.
Sep 4 17:11:12.333430 containerd[2019]: time="2024-09-04T17:11:12.333366781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:12.337377 containerd[2019]: time="2024-09-04T17:11:12.337326301Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599022"
Sep 4 17:11:12.339804 containerd[2019]: time="2024-09-04T17:11:12.339756925Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:12.348200 containerd[2019]: time="2024-09-04T17:11:12.348138673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:12.353808 containerd[2019]: time="2024-09-04T17:11:12.353743789Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 2.638757185s"
Sep 4 17:11:12.354021 containerd[2019]: time="2024-09-04T17:11:12.353987869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\""
Sep 4 17:11:12.393535 containerd[2019]: time="2024-09-04T17:11:12.393455221Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep 4 17:11:14.456803 containerd[2019]: time="2024-09-04T17:11:14.456743344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:14.459844 containerd[2019]: time="2024-09-04T17:11:14.459788968Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019496"
Sep 4 17:11:14.461548 containerd[2019]: time="2024-09-04T17:11:14.461470084Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:14.467299 containerd[2019]: time="2024-09-04T17:11:14.467212396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:14.470057 containerd[2019]: time="2024-09-04T17:11:14.469565620Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 2.075691035s"
Sep 4 17:11:14.470057 containerd[2019]: time="2024-09-04T17:11:14.469652080Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\""
Sep 4 17:11:14.511455 containerd[2019]: time="2024-09-04T17:11:14.511142404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep 4 17:11:15.423034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:11:15.433978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:11:15.860027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:11:15.870610 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:11:15.975086 kubelet[2558]: E0904 17:11:15.974469 2558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:11:15.984681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:11:15.985026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:11:16.046131 containerd[2019]: time="2024-09-04T17:11:16.046045947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:16.048449 containerd[2019]: time="2024-09-04T17:11:16.048112239Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533681"
Sep 4 17:11:16.050433 containerd[2019]: time="2024-09-04T17:11:16.050265759Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:16.062628 containerd[2019]: time="2024-09-04T17:11:16.061794268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:16.069049 containerd[2019]: time="2024-09-04T17:11:16.068953264Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.557738704s"
Sep 4 17:11:16.069309 containerd[2019]: time="2024-09-04T17:11:16.069248284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\""
Sep 4 17:11:16.120943 containerd[2019]: time="2024-09-04T17:11:16.119641588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep 4 17:11:17.489372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount115304442.mount: Deactivated successfully.
Sep 4 17:11:18.042462 containerd[2019]: time="2024-09-04T17:11:18.041900021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:18.043812 containerd[2019]: time="2024-09-04T17:11:18.043741913Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977930"
Sep 4 17:11:18.045736 containerd[2019]: time="2024-09-04T17:11:18.045643361Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:18.049866 containerd[2019]: time="2024-09-04T17:11:18.049768637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:18.051381 containerd[2019]: time="2024-09-04T17:11:18.051171605Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 1.931463381s"
Sep 4 17:11:18.051381 containerd[2019]: time="2024-09-04T17:11:18.051231725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\""
Sep 4 17:11:18.092298 containerd[2019]: time="2024-09-04T17:11:18.092236650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:11:18.617551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233019522.mount: Deactivated successfully.
Sep 4 17:11:18.629421 containerd[2019]: time="2024-09-04T17:11:18.629338580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:18.631221 containerd[2019]: time="2024-09-04T17:11:18.631114076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Sep 4 17:11:18.633112 containerd[2019]: time="2024-09-04T17:11:18.633027284Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:18.639852 containerd[2019]: time="2024-09-04T17:11:18.639763544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:18.642196 containerd[2019]: time="2024-09-04T17:11:18.641148776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 548.844194ms"
Sep 4 17:11:18.642196 containerd[2019]: time="2024-09-04T17:11:18.641243888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep 4 17:11:18.689646 containerd[2019]: time="2024-09-04T17:11:18.689296065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:11:19.261799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441232091.mount: Deactivated successfully.
Sep 4 17:11:22.287904 containerd[2019]: time="2024-09-04T17:11:22.286718302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:22.316098 containerd[2019]: time="2024-09-04T17:11:22.316021943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Sep 4 17:11:22.347115 containerd[2019]: time="2024-09-04T17:11:22.344807387Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:22.380927 containerd[2019]: time="2024-09-04T17:11:22.380819843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:22.383625 containerd[2019]: time="2024-09-04T17:11:22.383375819Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.694019586s"
Sep 4 17:11:22.383625 containerd[2019]: time="2024-09-04T17:11:22.383441999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Sep 4 17:11:22.429249 containerd[2019]: time="2024-09-04T17:11:22.429180515Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep 4 17:11:23.240462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904966562.mount: Deactivated successfully.
Sep 4 17:11:23.815775 containerd[2019]: time="2024-09-04T17:11:23.815695790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:23.818118 containerd[2019]: time="2024-09-04T17:11:23.818050274Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462"
Sep 4 17:11:23.820264 containerd[2019]: time="2024-09-04T17:11:23.820190450Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:23.825042 containerd[2019]: time="2024-09-04T17:11:23.824946662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:23.826844 containerd[2019]: time="2024-09-04T17:11:23.826605974Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.397343367s"
Sep 4 17:11:23.826844 containerd[2019]: time="2024-09-04T17:11:23.826710086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference
\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Sep 4 17:11:26.235539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:11:26.245075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:26.857979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:26.869498 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:11:26.967929 kubelet[2718]: E0904 17:11:26.967800 2718 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:11:26.971875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:11:26.972203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:11:29.634866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:29.643151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:29.688805 systemd[1]: Reloading requested from client PID 2732 ('systemctl') (unit session-7.scope)... Sep 4 17:11:29.688843 systemd[1]: Reloading... Sep 4 17:11:29.893617 zram_generator::config[2774]: No configuration found. Sep 4 17:11:30.138089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:30.315538 systemd[1]: Reloading finished in 626 ms. Sep 4 17:11:30.402314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:11:30.413313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:11:30.417377 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:11:30.419661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:11:30.427347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:11:31.007892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:11:31.029428 (kubelet)[2836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:11:31.116782 kubelet[2836]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:11:31.117246 kubelet[2836]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:11:31.117325 kubelet[2836]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:11:31.117625 kubelet[2836]: I0904 17:11:31.117521 2836 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:11:31.987499 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 4 17:11:32.475776 kubelet[2836]: I0904 17:11:32.475734 2836 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:11:32.476424 kubelet[2836]: I0904 17:11:32.476398 2836 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:11:32.476875 kubelet[2836]: I0904 17:11:32.476847 2836 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:11:32.515818 kubelet[2836]: I0904 17:11:32.515766 2836 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:11:32.517706 kubelet[2836]: E0904 17:11:32.517171 2836 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.530712 kubelet[2836]: W0904 17:11:32.530657 2836 machine.go:65] Cannot read vendor id correctly, set empty.
Sep 4 17:11:32.532142 kubelet[2836]: I0904 17:11:32.532076 2836 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:11:32.533137 kubelet[2836]: I0904 17:11:32.533082 2836 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:11:32.533467 kubelet[2836]: I0904 17:11:32.533409 2836 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:11:32.533651 kubelet[2836]: I0904 17:11:32.533507 2836 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:11:32.533651 kubelet[2836]: I0904 17:11:32.533549 2836 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:11:32.533930 kubelet[2836]: I0904 17:11:32.533884 2836 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:11:32.540176 kubelet[2836]: I0904 17:11:32.540118 2836 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:11:32.540314 kubelet[2836]: I0904 17:11:32.540192 2836 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:11:32.540369 kubelet[2836]: I0904 17:11:32.540313 2836 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:11:32.540369 kubelet[2836]: I0904 17:11:32.540352 2836 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:11:32.544489 kubelet[2836]: W0904 17:11:32.543505 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.544489 kubelet[2836]: E0904 17:11:32.543656 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.544489 kubelet[2836]: W0904 17:11:32.544285 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-239&limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.544489 kubelet[2836]: E0904 17:11:32.544385 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-239&limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.545232 kubelet[2836]: I0904 17:11:32.545168 2836 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:11:32.548842 kubelet[2836]: W0904 17:11:32.548775 2836 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:11:32.550945 kubelet[2836]: I0904 17:11:32.550486 2836 server.go:1232] "Started kubelet"
Sep 4 17:11:32.552993 kubelet[2836]: I0904 17:11:32.552928 2836 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:11:32.554713 kubelet[2836]: I0904 17:11:32.554298 2836 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:11:32.556903 kubelet[2836]: I0904 17:11:32.556843 2836 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:11:32.557833 kubelet[2836]: I0904 17:11:32.557781 2836 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:11:32.559527 kubelet[2836]: E0904 17:11:32.558623 2836 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-239.17f219b9b0196415", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-239", UID:"ip-172-31-30-239", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-239"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 11, 32, 550419477, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 11, 32, 550419477, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-239"}': 'Post "https://172.31.30.239:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.239:6443: connect: connection refused'(may retry after sleeping)
Sep 4 17:11:32.559527 kubelet[2836]: E0904 17:11:32.559040 2836 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:11:32.559527 kubelet[2836]: E0904 17:11:32.559090 2836 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:11:32.560111 kubelet[2836]: I0904 17:11:32.560006 2836 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:11:32.571688 kubelet[2836]: E0904 17:11:32.570688 2836 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-239\" not found"
Sep 4 17:11:32.571688 kubelet[2836]: I0904 17:11:32.570753 2836 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:11:32.571688 kubelet[2836]: I0904 17:11:32.571101 2836 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:11:32.571688 kubelet[2836]: I0904 17:11:32.571257 2836 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:11:32.573790 kubelet[2836]: W0904 17:11:32.572867 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.573790 kubelet[2836]: E0904 17:11:32.572986 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.575543 kubelet[2836]: E0904 17:11:32.575466 2836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-239?timeout=10s\": dial tcp 172.31.30.239:6443: connect: connection refused" interval="200ms"
Sep 4 17:11:32.594650 kubelet[2836]: I0904 17:11:32.593464 2836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:11:32.595908 kubelet[2836]: I0904 17:11:32.595835 2836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:11:32.595908 kubelet[2836]: I0904 17:11:32.595893 2836 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:11:32.595908 kubelet[2836]: I0904 17:11:32.595929 2836 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:11:32.596258 kubelet[2836]: E0904 17:11:32.596006 2836 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:11:32.605237 kubelet[2836]: W0904 17:11:32.605135 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.607736 kubelet[2836]: E0904 17:11:32.607692 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:32.662632 kubelet[2836]: I0904 17:11:32.662340 2836 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:11:32.662632 kubelet[2836]: I0904 17:11:32.662395 2836 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:11:32.662632 kubelet[2836]: I0904 17:11:32.662458 2836 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:11:32.673660 kubelet[2836]: I0904 17:11:32.673356 2836 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-239"
Sep 4 17:11:32.674008 kubelet[2836]: E0904 17:11:32.673930 2836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.239:6443/api/v1/nodes\": dial tcp 172.31.30.239:6443: connect: connection refused" node="ip-172-31-30-239"
Sep 4 17:11:32.680794 kubelet[2836]: I0904 17:11:32.680680 2836 policy_none.go:49] "None policy: Start"
Sep 4 17:11:32.682072 kubelet[2836]: I0904 17:11:32.682007 2836 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:11:32.682072 kubelet[2836]: I0904 17:11:32.682066 2836 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:11:32.697159 kubelet[2836]: E0904 17:11:32.697098 2836 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:11:32.697736 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:11:32.713456 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:11:32.721393 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:11:32.732552 kubelet[2836]: I0904 17:11:32.732429 2836 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:11:32.734648 kubelet[2836]: I0904 17:11:32.732888 2836 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:11:32.736773 kubelet[2836]: E0904 17:11:32.736713 2836 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-239\" not found"
Sep 4 17:11:32.777303 kubelet[2836]: E0904 17:11:32.777253 2836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-239?timeout=10s\": dial tcp 172.31.30.239:6443: connect: connection refused" interval="400ms"
Sep 4 17:11:32.876880 kubelet[2836]: I0904 17:11:32.876822 2836 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-239"
Sep 4 17:11:32.877593 kubelet[2836]: E0904 17:11:32.877539 2836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.239:6443/api/v1/nodes\": dial tcp 172.31.30.239:6443: connect: connection refused" node="ip-172-31-30-239"
Sep 4 17:11:32.897963 kubelet[2836]: I0904 17:11:32.897629 2836 topology_manager.go:215] "Topology Admit Handler" podUID="abe412a11eaf885c4e86fce358da353e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:32.900249 kubelet[2836]: I0904 17:11:32.900204 2836 topology_manager.go:215] "Topology Admit Handler" podUID="3e34fc2130fcd219039b6fe23ef8dc8d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:32.903177 kubelet[2836]: I0904 17:11:32.902862 2836 topology_manager.go:215] "Topology Admit Handler" podUID="24ded70e06a03c14f7a8f79033709fb0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-239"
Sep 4 17:11:32.919279 systemd[1]: Created slice kubepods-burstable-podabe412a11eaf885c4e86fce358da353e.slice - libcontainer container kubepods-burstable-podabe412a11eaf885c4e86fce358da353e.slice.
Sep 4 17:11:32.950009 systemd[1]: Created slice kubepods-burstable-pod24ded70e06a03c14f7a8f79033709fb0.slice - libcontainer container kubepods-burstable-pod24ded70e06a03c14f7a8f79033709fb0.slice.
Sep 4 17:11:32.968996 systemd[1]: Created slice kubepods-burstable-pod3e34fc2130fcd219039b6fe23ef8dc8d.slice - libcontainer container kubepods-burstable-pod3e34fc2130fcd219039b6fe23ef8dc8d.slice.
Sep 4 17:11:32.973690 kubelet[2836]: I0904 17:11:32.973655 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abe412a11eaf885c4e86fce358da353e-ca-certs\") pod \"kube-apiserver-ip-172-31-30-239\" (UID: \"abe412a11eaf885c4e86fce358da353e\") " pod="kube-system/kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:32.974283 kubelet[2836]: I0904 17:11:32.974232 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:32.974370 kubelet[2836]: I0904 17:11:32.974332 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:32.974422 kubelet[2836]: I0904 17:11:32.974398 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:32.974472 kubelet[2836]: I0904 17:11:32.974453 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:32.974522 kubelet[2836]: I0904 17:11:32.974500 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24ded70e06a03c14f7a8f79033709fb0-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-239\" (UID: \"24ded70e06a03c14f7a8f79033709fb0\") " pod="kube-system/kube-scheduler-ip-172-31-30-239"
Sep 4 17:11:32.974869 kubelet[2836]: I0904 17:11:32.974840 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abe412a11eaf885c4e86fce358da353e-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-239\" (UID: \"abe412a11eaf885c4e86fce358da353e\") " pod="kube-system/kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:32.974959 kubelet[2836]: I0904 17:11:32.974937 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abe412a11eaf885c4e86fce358da353e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-239\" (UID: \"abe412a11eaf885c4e86fce358da353e\") " pod="kube-system/kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:32.975011 kubelet[2836]: I0904 17:11:32.974988 2836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:33.178790 kubelet[2836]: E0904 17:11:33.178748 2836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-239?timeout=10s\": dial tcp 172.31.30.239:6443: connect: connection refused" interval="800ms"
Sep 4 17:11:33.244987 containerd[2019]: time="2024-09-04T17:11:33.244923537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-239,Uid:abe412a11eaf885c4e86fce358da353e,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:33.265071 containerd[2019]: time="2024-09-04T17:11:33.264831873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-239,Uid:24ded70e06a03c14f7a8f79033709fb0,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:33.275780 containerd[2019]: time="2024-09-04T17:11:33.275713821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-239,Uid:3e34fc2130fcd219039b6fe23ef8dc8d,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:33.282147 kubelet[2836]: I0904 17:11:33.281616 2836 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-239"
Sep 4 17:11:33.282147 kubelet[2836]: E0904 17:11:33.282076 2836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.239:6443/api/v1/nodes\": dial tcp 172.31.30.239:6443: connect: connection refused" node="ip-172-31-30-239"
Sep 4 17:11:33.395286 kubelet[2836]: W0904 17:11:33.395210 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-239&limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.395463 kubelet[2836]: E0904 17:11:33.395444 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-239&limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.400793 kubelet[2836]: W0904 17:11:33.400711 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.400793 kubelet[2836]: E0904 17:11:33.400796 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.443975 kubelet[2836]: W0904 17:11:33.443664 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.443975 kubelet[2836]: E0904 17:11:33.443750 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.703190 kubelet[2836]: W0904 17:11:33.702987 2836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.703190 kubelet[2836]: E0904 17:11:33.703079 2836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:33.980062 kubelet[2836]: E0904 17:11:33.979852 2836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-239?timeout=10s\": dial tcp 172.31.30.239:6443: connect: connection refused" interval="1.6s"
Sep 4 17:11:34.086983 kubelet[2836]: I0904 17:11:34.086918 2836 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-239"
Sep 4 17:11:34.087641 kubelet[2836]: E0904 17:11:34.087413 2836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.239:6443/api/v1/nodes\": dial tcp 172.31.30.239:6443: connect: connection refused" node="ip-172-31-30-239"
Sep 4 17:11:34.095719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378962932.mount: Deactivated successfully.
Sep 4 17:11:34.105777 containerd[2019]: time="2024-09-04T17:11:34.105702165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:11:34.112263 containerd[2019]: time="2024-09-04T17:11:34.112189653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 4 17:11:34.114213 containerd[2019]: time="2024-09-04T17:11:34.113573373Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:11:34.116214 containerd[2019]: time="2024-09-04T17:11:34.116082489Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:11:34.118534 containerd[2019]: time="2024-09-04T17:11:34.118467729Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:11:34.120389 containerd[2019]: time="2024-09-04T17:11:34.120321645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:11:34.121396 containerd[2019]: time="2024-09-04T17:11:34.121296273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:11:34.125815 containerd[2019]: time="2024-09-04T17:11:34.125679321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:11:34.129306 containerd[2019]: time="2024-09-04T17:11:34.129018993Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 883.955104ms"
Sep 4 17:11:34.132722 containerd[2019]: time="2024-09-04T17:11:34.132643917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 867.674908ms"
Sep 4 17:11:34.140187 containerd[2019]: time="2024-09-04T17:11:34.139794441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 863.925124ms"
Sep 4 17:11:34.457018 containerd[2019]: time="2024-09-04T17:11:34.456858539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:11:34.459617 containerd[2019]: time="2024-09-04T17:11:34.456973331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:34.459617 containerd[2019]: time="2024-09-04T17:11:34.457020275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:11:34.459617 containerd[2019]: time="2024-09-04T17:11:34.457045127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:34.461227 containerd[2019]: time="2024-09-04T17:11:34.459902039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:11:34.461227 containerd[2019]: time="2024-09-04T17:11:34.459994163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:34.461227 containerd[2019]: time="2024-09-04T17:11:34.460025651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:11:34.461227 containerd[2019]: time="2024-09-04T17:11:34.460049687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:34.461851 containerd[2019]: time="2024-09-04T17:11:34.461718899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:11:34.461973 containerd[2019]: time="2024-09-04T17:11:34.461834675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:34.461973 containerd[2019]: time="2024-09-04T17:11:34.461881451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:11:34.461973 containerd[2019]: time="2024-09-04T17:11:34.461915735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:34.510181 systemd[1]: Started cri-containerd-3520f17911940291f74b7bc2a86a90ecb1f8ce3405b9eb525ed351ebb32c7a47.scope - libcontainer container 3520f17911940291f74b7bc2a86a90ecb1f8ce3405b9eb525ed351ebb32c7a47.
Sep 4 17:11:34.523064 systemd[1]: Started cri-containerd-77aa7b6bf176eea78a200eae1d8fbd4a514ef25e2163509009765673c88d2cee.scope - libcontainer container 77aa7b6bf176eea78a200eae1d8fbd4a514ef25e2163509009765673c88d2cee.
Sep 4 17:11:34.533417 systemd[1]: Started cri-containerd-612daf87904aca33a642792a96e9aede246a45f5830c9cf393a4c1363bb7b27c.scope - libcontainer container 612daf87904aca33a642792a96e9aede246a45f5830c9cf393a4c1363bb7b27c.
Sep 4 17:11:34.599883 kubelet[2836]: E0904 17:11:34.599818 2836 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.239:6443: connect: connection refused
Sep 4 17:11:34.631868 containerd[2019]: time="2024-09-04T17:11:34.631502496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-239,Uid:3e34fc2130fcd219039b6fe23ef8dc8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3520f17911940291f74b7bc2a86a90ecb1f8ce3405b9eb525ed351ebb32c7a47\""
Sep 4 17:11:34.650270 containerd[2019]: time="2024-09-04T17:11:34.650210148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-239,Uid:abe412a11eaf885c4e86fce358da353e,Namespace:kube-system,Attempt:0,} returns sandbox id \"77aa7b6bf176eea78a200eae1d8fbd4a514ef25e2163509009765673c88d2cee\""
Sep 4 17:11:34.653238 containerd[2019]: time="2024-09-04T17:11:34.652968972Z" level=info msg="CreateContainer within sandbox \"3520f17911940291f74b7bc2a86a90ecb1f8ce3405b9eb525ed351ebb32c7a47\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 17:11:34.659620 containerd[2019]: time="2024-09-04T17:11:34.659412372Z" level=info msg="CreateContainer within sandbox \"77aa7b6bf176eea78a200eae1d8fbd4a514ef25e2163509009765673c88d2cee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 17:11:34.671979 containerd[2019]: time="2024-09-04T17:11:34.671904480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-239,Uid:24ded70e06a03c14f7a8f79033709fb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"612daf87904aca33a642792a96e9aede246a45f5830c9cf393a4c1363bb7b27c\""
Sep 4 17:11:34.682118 containerd[2019]: time="2024-09-04T17:11:34.682041960Z" level=info msg="CreateContainer within sandbox \"612daf87904aca33a642792a96e9aede246a45f5830c9cf393a4c1363bb7b27c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 17:11:34.699734 containerd[2019]: time="2024-09-04T17:11:34.699625416Z" level=info msg="CreateContainer within sandbox \"3520f17911940291f74b7bc2a86a90ecb1f8ce3405b9eb525ed351ebb32c7a47\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849\""
Sep 4 17:11:34.700550 containerd[2019]: time="2024-09-04T17:11:34.700501644Z" level=info msg="StartContainer for \"e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849\""
Sep 4 17:11:34.725860 containerd[2019]: time="2024-09-04T17:11:34.724664424Z" level=info msg="CreateContainer within sandbox \"77aa7b6bf176eea78a200eae1d8fbd4a514ef25e2163509009765673c88d2cee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e17a8a9a700422a79620078ebfdd38228f50ae5bb3c4b467c6a4dcfa849c741\""
Sep 4 17:11:34.728974 containerd[2019]: time="2024-09-04T17:11:34.728837604Z" level=info msg="StartContainer for \"1e17a8a9a700422a79620078ebfdd38228f50ae5bb3c4b467c6a4dcfa849c741\""
Sep 4 17:11:34.738027 containerd[2019]: time="2024-09-04T17:11:34.737783952Z" level=info msg="CreateContainer within sandbox \"612daf87904aca33a642792a96e9aede246a45f5830c9cf393a4c1363bb7b27c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210\""
Sep 4 17:11:34.740829 containerd[2019]: time="2024-09-04T17:11:34.739999776Z" level=info msg="StartContainer for \"9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210\""
Sep 4 17:11:34.759902 systemd[1]: Started cri-containerd-e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849.scope - libcontainer container e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849.
Sep 4 17:11:34.811242 systemd[1]: Started cri-containerd-1e17a8a9a700422a79620078ebfdd38228f50ae5bb3c4b467c6a4dcfa849c741.scope - libcontainer container 1e17a8a9a700422a79620078ebfdd38228f50ae5bb3c4b467c6a4dcfa849c741.
Sep 4 17:11:34.828323 systemd[1]: Started cri-containerd-9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210.scope - libcontainer container 9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210.
Sep 4 17:11:34.906456 containerd[2019]: time="2024-09-04T17:11:34.906235165Z" level=info msg="StartContainer for \"e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849\" returns successfully"
Sep 4 17:11:34.947445 containerd[2019]: time="2024-09-04T17:11:34.946984033Z" level=info msg="StartContainer for \"1e17a8a9a700422a79620078ebfdd38228f50ae5bb3c4b467c6a4dcfa849c741\" returns successfully"
Sep 4 17:11:35.019980 containerd[2019]: time="2024-09-04T17:11:35.019644934Z" level=info msg="StartContainer for \"9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210\" returns successfully"
Sep 4 17:11:35.690657 kubelet[2836]: I0904 17:11:35.690170 2836 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-239"
Sep 4 17:11:38.835313 kubelet[2836]: E0904 17:11:38.835249 2836 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-239\" not found" node="ip-172-31-30-239"
Sep 4 17:11:38.907654 kubelet[2836]: I0904 17:11:38.907570 2836 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-239"
Sep 4 17:11:38.949834 kubelet[2836]: E0904 17:11:38.949684 2836 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-239.17f219b9b0196415", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-239", UID:"ip-172-31-30-239", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-239"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 11, 32, 550419477, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 11, 32, 550419477, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-239"}': 'namespaces "default" not found' (will not retry!)
Sep 4 17:11:39.159760 kubelet[2836]: E0904 17:11:39.159692 2836 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-239\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-239" Sep 4 17:11:39.546799 kubelet[2836]: I0904 17:11:39.546643 2836 apiserver.go:52] "Watching apiserver" Sep 4 17:11:39.573611 kubelet[2836]: I0904 17:11:39.571422 2836 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:11:41.670839 systemd[1]: Reloading requested from client PID 3113 ('systemctl') (unit session-7.scope)... Sep 4 17:11:41.671300 systemd[1]: Reloading... Sep 4 17:11:41.808627 zram_generator::config[3151]: No configuration found. Sep 4 17:11:42.074596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:42.282657 systemd[1]: Reloading finished in 610 ms. Sep 4 17:11:42.360004 kubelet[2836]: I0904 17:11:42.359834 2836 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:11:42.360328 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:42.372933 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:11:42.373643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:42.373719 systemd[1]: kubelet.service: Consumed 2.289s CPU time, 115.9M memory peak, 0B memory swap peak. Sep 4 17:11:42.384190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:43.049944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:11:43.067499 (kubelet)[3211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:11:43.189245 kubelet[3211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:11:43.189719 kubelet[3211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:11:43.189719 kubelet[3211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:11:43.189719 kubelet[3211]: I0904 17:11:43.189436 3211 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:11:43.202451 kubelet[3211]: I0904 17:11:43.202379 3211 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:11:43.202451 kubelet[3211]: I0904 17:11:43.202443 3211 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:11:43.203195 kubelet[3211]: I0904 17:11:43.202828 3211 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:11:43.205982 kubelet[3211]: I0904 17:11:43.205930 3211 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 17:11:43.212088 kubelet[3211]: I0904 17:11:43.212027 3211 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:11:43.227680 kubelet[3211]: W0904 17:11:43.227634 3211 machine.go:65] Cannot read vendor id correctly, set empty.
Sep 4 17:11:43.229440 kubelet[3211]: I0904 17:11:43.229160 3211 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:11:43.229782 kubelet[3211]: I0904 17:11:43.229752 3211 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:11:43.230682 kubelet[3211]: I0904 17:11:43.230054 3211 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:11:43.230682 kubelet[3211]: I0904 17:11:43.230136 3211 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:11:43.230682 kubelet[3211]: I0904 17:11:43.230158 3211 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:11:43.230682 kubelet[3211]: I0904 17:11:43.230235 3211 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:11:43.230682 kubelet[3211]: I0904 17:11:43.230404 3211 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:11:43.230682 kubelet[3211]: I0904 17:11:43.230431 3211 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:11:43.230682 kubelet[3211]: I0904 17:11:43.230493 3211 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:11:43.231158 kubelet[3211]: I0904 17:11:43.230518 3211 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:11:43.235809 kubelet[3211]: I0904 17:11:43.235501 3211 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:11:43.240794 kubelet[3211]: I0904 17:11:43.237981 3211 server.go:1232] "Started kubelet"
Sep 4 17:11:43.244264 kubelet[3211]: I0904 17:11:43.244210 3211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:11:43.254667 kubelet[3211]: I0904 17:11:43.253784 3211 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:11:43.256287 kubelet[3211]: I0904 17:11:43.255705 3211 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:11:43.263293 kubelet[3211]: I0904 17:11:43.261541 3211 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:11:43.263293 kubelet[3211]: I0904 17:11:43.261935 3211 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:11:43.268824 kubelet[3211]: I0904 17:11:43.268758 3211 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:11:43.269916 kubelet[3211]: I0904 17:11:43.269880 3211 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:11:43.270286 kubelet[3211]: I0904 17:11:43.270266 3211 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:11:43.271222 kubelet[3211]: E0904 17:11:43.271187 3211 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:11:43.271414 kubelet[3211]: E0904 17:11:43.271393 3211 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:11:43.318920 kubelet[3211]: I0904 17:11:43.318796 3211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:11:43.326432 kubelet[3211]: I0904 17:11:43.326392 3211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:11:43.326693 kubelet[3211]: I0904 17:11:43.326671 3211 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:11:43.326808 kubelet[3211]: I0904 17:11:43.326789 3211 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:11:43.326985 kubelet[3211]: E0904 17:11:43.326966 3211 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:11:43.377419 kubelet[3211]: E0904 17:11:43.377329 3211 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Sep 4 17:11:43.389515 kubelet[3211]: I0904 17:11:43.389480 3211 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-239"
Sep 4 17:11:43.414078 kubelet[3211]: I0904 17:11:43.414017 3211 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-30-239"
Sep 4 17:11:43.414206 kubelet[3211]: I0904 17:11:43.414160 3211 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-239"
Sep 4 17:11:43.427338 kubelet[3211]: E0904 17:11:43.427286 3211 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:11:43.515734 kubelet[3211]: I0904 17:11:43.515698 3211 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:11:43.516183 kubelet[3211]: I0904 17:11:43.516161 3211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:11:43.516701 kubelet[3211]: I0904 17:11:43.516279 3211 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:11:43.516701 kubelet[3211]: I0904 17:11:43.516526 3211 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 17:11:43.516701 kubelet[3211]: I0904 17:11:43.516564 3211 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 17:11:43.516701 kubelet[3211]: I0904 17:11:43.516618 3211 policy_none.go:49] "None policy: Start"
Sep 4 17:11:43.519315 kubelet[3211]: I0904 17:11:43.519243 3211 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:11:43.519432 kubelet[3211]: I0904 17:11:43.519332 3211 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:11:43.520918 kubelet[3211]: I0904 17:11:43.520828 3211 state_mem.go:75] "Updated machine memory state"
Sep 4 17:11:43.538182 kubelet[3211]: I0904 17:11:43.536687 3211 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:11:43.538182 kubelet[3211]: I0904 17:11:43.537887 3211 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:11:43.628930 kubelet[3211]: I0904 17:11:43.628240 3211 topology_manager.go:215] "Topology Admit Handler" podUID="abe412a11eaf885c4e86fce358da353e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:43.628930 kubelet[3211]: I0904 17:11:43.628403 3211 topology_manager.go:215] "Topology Admit Handler" podUID="3e34fc2130fcd219039b6fe23ef8dc8d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:43.628930 kubelet[3211]: I0904 17:11:43.628481 3211 topology_manager.go:215] "Topology Admit Handler" podUID="24ded70e06a03c14f7a8f79033709fb0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-239"
Sep 4 17:11:43.643257 kubelet[3211]: E0904 17:11:43.643194 3211 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-239\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:43.675054 kubelet[3211]: I0904 17:11:43.673972 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abe412a11eaf885c4e86fce358da353e-ca-certs\") pod \"kube-apiserver-ip-172-31-30-239\" (UID: \"abe412a11eaf885c4e86fce358da353e\") " pod="kube-system/kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:43.675213 kubelet[3211]: I0904 17:11:43.675096 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abe412a11eaf885c4e86fce358da353e-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-239\" (UID: \"abe412a11eaf885c4e86fce358da353e\") " pod="kube-system/kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:43.675213 kubelet[3211]: I0904 17:11:43.675154 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abe412a11eaf885c4e86fce358da353e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-239\" (UID: \"abe412a11eaf885c4e86fce358da353e\") " pod="kube-system/kube-apiserver-ip-172-31-30-239"
Sep 4 17:11:43.675213 kubelet[3211]: I0904 17:11:43.675203 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:43.675393 kubelet[3211]: I0904 17:11:43.675247 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:43.675393 kubelet[3211]: I0904 17:11:43.675292 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:43.675393 kubelet[3211]: I0904 17:11:43.675337 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:43.675393 kubelet[3211]: I0904 17:11:43.675389 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e34fc2130fcd219039b6fe23ef8dc8d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-239\" (UID: \"3e34fc2130fcd219039b6fe23ef8dc8d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:43.675611 kubelet[3211]: I0904 17:11:43.675432 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24ded70e06a03c14f7a8f79033709fb0-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-239\" (UID: \"24ded70e06a03c14f7a8f79033709fb0\") " pod="kube-system/kube-scheduler-ip-172-31-30-239"
Sep 4 17:11:44.232463 kubelet[3211]: I0904 17:11:44.232395 3211 apiserver.go:52] "Watching apiserver"
Sep 4 17:11:44.270383 kubelet[3211]: I0904 17:11:44.270207 3211 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:11:44.435286 kubelet[3211]: E0904 17:11:44.434963 3211 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-239\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-239"
Sep 4 17:11:44.481622 kubelet[3211]: E0904 17:11:44.479512 3211 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-239\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-239"
Sep 4 17:11:44.551348 kubelet[3211]: I0904 17:11:44.551191 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-239" podStartSLOduration=1.551099841 podCreationTimestamp="2024-09-04 17:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:44.523604817 +0000 UTC m=+1.444033280" watchObservedRunningTime="2024-09-04 17:11:44.551099841 +0000 UTC m=+1.471528316"
Sep 4 17:11:44.553539 kubelet[3211]: I0904 17:11:44.553398 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-239" podStartSLOduration=1.5533045890000001 podCreationTimestamp="2024-09-04 17:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:44.550347885 +0000 UTC m=+1.470776456" watchObservedRunningTime="2024-09-04 17:11:44.553304589 +0000 UTC m=+1.473733076"
Sep 4 17:11:46.180325 update_engine[1997]: I0904 17:11:46.178787 1997 update_attempter.cc:509] Updating boot flags...
Sep 4 17:11:46.436361 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3264)
Sep 4 17:11:46.860836 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3266)
Sep 4 17:11:47.446801 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3266)
Sep 4 17:11:48.876765 kubelet[3211]: I0904 17:11:48.876697 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-239" podStartSLOduration=9.876613971 podCreationTimestamp="2024-09-04 17:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:44.588885837 +0000 UTC m=+1.509314336" watchObservedRunningTime="2024-09-04 17:11:48.876613971 +0000 UTC m=+5.797042458"
Sep 4 17:11:50.934266 sudo[2324]: pam_unix(sudo:session): session closed for user root
Sep 4 17:11:50.958637 sshd[2321]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:50.964070 systemd[1]: sshd@6-172.31.30.239:22-139.178.89.65:49442.service: Deactivated successfully.
Sep 4 17:11:50.969937 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:11:50.971740 systemd[1]: session-7.scope: Consumed 9.462s CPU time, 131.6M memory peak, 0B memory swap peak.
Sep 4 17:11:50.973910 systemd-logind[1996]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:11:50.976687 systemd-logind[1996]: Removed session 7.
Sep 4 17:11:55.382651 kubelet[3211]: I0904 17:11:55.382130 3211 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 17:11:55.384851 kubelet[3211]: I0904 17:11:55.384544 3211 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:11:55.385107 containerd[2019]: time="2024-09-04T17:11:55.384204511Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 17:11:56.078787 kubelet[3211]: I0904 17:11:56.077852 3211 topology_manager.go:215] "Topology Admit Handler" podUID="5126c5d3-c4c3-4b91-8759-1dd08c42b0dd" podNamespace="kube-system" podName="kube-proxy-74k5l"
Sep 4 17:11:56.098374 systemd[1]: Created slice kubepods-besteffort-pod5126c5d3_c4c3_4b91_8759_1dd08c42b0dd.slice - libcontainer container kubepods-besteffort-pod5126c5d3_c4c3_4b91_8759_1dd08c42b0dd.slice.
Sep 4 17:11:56.174458 kubelet[3211]: I0904 17:11:56.174350 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5126c5d3-c4c3-4b91-8759-1dd08c42b0dd-lib-modules\") pod \"kube-proxy-74k5l\" (UID: \"5126c5d3-c4c3-4b91-8759-1dd08c42b0dd\") " pod="kube-system/kube-proxy-74k5l"
Sep 4 17:11:56.174458 kubelet[3211]: I0904 17:11:56.174447 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtx8j\" (UniqueName: \"kubernetes.io/projected/5126c5d3-c4c3-4b91-8759-1dd08c42b0dd-kube-api-access-vtx8j\") pod \"kube-proxy-74k5l\" (UID: \"5126c5d3-c4c3-4b91-8759-1dd08c42b0dd\") " pod="kube-system/kube-proxy-74k5l"
Sep 4 17:11:56.174786 kubelet[3211]: I0904 17:11:56.174500 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5126c5d3-c4c3-4b91-8759-1dd08c42b0dd-xtables-lock\") pod \"kube-proxy-74k5l\" (UID: \"5126c5d3-c4c3-4b91-8759-1dd08c42b0dd\") " pod="kube-system/kube-proxy-74k5l"
Sep 4 17:11:56.174786 kubelet[3211]: I0904 17:11:56.174550 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5126c5d3-c4c3-4b91-8759-1dd08c42b0dd-kube-proxy\") pod \"kube-proxy-74k5l\" (UID: \"5126c5d3-c4c3-4b91-8759-1dd08c42b0dd\") " pod="kube-system/kube-proxy-74k5l"
Sep 4 17:11:56.411572 containerd[2019]: time="2024-09-04T17:11:56.411491372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74k5l,Uid:5126c5d3-c4c3-4b91-8759-1dd08c42b0dd,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:56.429340 kubelet[3211]: I0904 17:11:56.428475 3211 topology_manager.go:215] "Topology Admit Handler" podUID="5c122eee-f7be-42e8-81fb-b584d9853510" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-bwjjk"
Sep 4 17:11:56.468753 systemd[1]: Created slice kubepods-besteffort-pod5c122eee_f7be_42e8_81fb_b584d9853510.slice - libcontainer container kubepods-besteffort-pod5c122eee_f7be_42e8_81fb_b584d9853510.slice.
Sep 4 17:11:56.477133 kubelet[3211]: I0904 17:11:56.476793 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7zj\" (UniqueName: \"kubernetes.io/projected/5c122eee-f7be-42e8-81fb-b584d9853510-kube-api-access-jw7zj\") pod \"tigera-operator-5d56685c77-bwjjk\" (UID: \"5c122eee-f7be-42e8-81fb-b584d9853510\") " pod="tigera-operator/tigera-operator-5d56685c77-bwjjk" Sep 4 17:11:56.477133 kubelet[3211]: I0904 17:11:56.476863 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c122eee-f7be-42e8-81fb-b584d9853510-var-lib-calico\") pod \"tigera-operator-5d56685c77-bwjjk\" (UID: \"5c122eee-f7be-42e8-81fb-b584d9853510\") " pod="tigera-operator/tigera-operator-5d56685c77-bwjjk" Sep 4 17:11:56.480650 containerd[2019]: time="2024-09-04T17:11:56.478378400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:56.480650 containerd[2019]: time="2024-09-04T17:11:56.478495616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:56.480650 containerd[2019]: time="2024-09-04T17:11:56.478538228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:56.480650 containerd[2019]: time="2024-09-04T17:11:56.478571624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:56.519168 systemd[1]: run-containerd-runc-k8s.io-77c8bbd2965d38f4e269189499746f87629c01a9ffa345adb8e98af522925a7d-runc.6XCRa1.mount: Deactivated successfully. 
Sep 4 17:11:56.533975 systemd[1]: Started cri-containerd-77c8bbd2965d38f4e269189499746f87629c01a9ffa345adb8e98af522925a7d.scope - libcontainer container 77c8bbd2965d38f4e269189499746f87629c01a9ffa345adb8e98af522925a7d. Sep 4 17:11:56.576386 containerd[2019]: time="2024-09-04T17:11:56.576043893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74k5l,Uid:5126c5d3-c4c3-4b91-8759-1dd08c42b0dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"77c8bbd2965d38f4e269189499746f87629c01a9ffa345adb8e98af522925a7d\"" Sep 4 17:11:56.587847 containerd[2019]: time="2024-09-04T17:11:56.587560845Z" level=info msg="CreateContainer within sandbox \"77c8bbd2965d38f4e269189499746f87629c01a9ffa345adb8e98af522925a7d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:11:56.648364 containerd[2019]: time="2024-09-04T17:11:56.648303201Z" level=info msg="CreateContainer within sandbox \"77c8bbd2965d38f4e269189499746f87629c01a9ffa345adb8e98af522925a7d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44f5fc6f06cb697797bd211e1d9734089f99a89be74ff6bdb6cfa126aacc3c1e\"" Sep 4 17:11:56.650710 containerd[2019]: time="2024-09-04T17:11:56.649290933Z" level=info msg="StartContainer for \"44f5fc6f06cb697797bd211e1d9734089f99a89be74ff6bdb6cfa126aacc3c1e\"" Sep 4 17:11:56.699958 systemd[1]: Started cri-containerd-44f5fc6f06cb697797bd211e1d9734089f99a89be74ff6bdb6cfa126aacc3c1e.scope - libcontainer container 44f5fc6f06cb697797bd211e1d9734089f99a89be74ff6bdb6cfa126aacc3c1e. 
Sep 4 17:11:56.756399 containerd[2019]: time="2024-09-04T17:11:56.756259882Z" level=info msg="StartContainer for \"44f5fc6f06cb697797bd211e1d9734089f99a89be74ff6bdb6cfa126aacc3c1e\" returns successfully" Sep 4 17:11:56.782912 containerd[2019]: time="2024-09-04T17:11:56.782780770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-bwjjk,Uid:5c122eee-f7be-42e8-81fb-b584d9853510,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:11:56.837690 containerd[2019]: time="2024-09-04T17:11:56.835550554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:56.837690 containerd[2019]: time="2024-09-04T17:11:56.835707970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:56.837690 containerd[2019]: time="2024-09-04T17:11:56.835758742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:56.837690 containerd[2019]: time="2024-09-04T17:11:56.835792282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:56.880122 systemd[1]: Started cri-containerd-7ac9634aa0737728f96816800ca2f1fb285d500913ded8efb00377ebff86296a.scope - libcontainer container 7ac9634aa0737728f96816800ca2f1fb285d500913ded8efb00377ebff86296a. 
Sep 4 17:11:56.966261 containerd[2019]: time="2024-09-04T17:11:56.966027455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-bwjjk,Uid:5c122eee-f7be-42e8-81fb-b584d9853510,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7ac9634aa0737728f96816800ca2f1fb285d500913ded8efb00377ebff86296a\"" Sep 4 17:11:56.971635 containerd[2019]: time="2024-09-04T17:11:56.970837355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:11:58.192240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581595191.mount: Deactivated successfully. Sep 4 17:11:58.975128 containerd[2019]: time="2024-09-04T17:11:58.975045745Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:58.977216 containerd[2019]: time="2024-09-04T17:11:58.977120005Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485915" Sep 4 17:11:58.979419 containerd[2019]: time="2024-09-04T17:11:58.979068205Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:58.985423 containerd[2019]: time="2024-09-04T17:11:58.985321837Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:58.987144 containerd[2019]: time="2024-09-04T17:11:58.986914957Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.016014494s" Sep 4 17:11:58.987144 
containerd[2019]: time="2024-09-04T17:11:58.986986501Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:11:58.991206 containerd[2019]: time="2024-09-04T17:11:58.991069249Z" level=info msg="CreateContainer within sandbox \"7ac9634aa0737728f96816800ca2f1fb285d500913ded8efb00377ebff86296a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:11:59.021931 containerd[2019]: time="2024-09-04T17:11:59.021844617Z" level=info msg="CreateContainer within sandbox \"7ac9634aa0737728f96816800ca2f1fb285d500913ded8efb00377ebff86296a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81\"" Sep 4 17:11:59.023513 containerd[2019]: time="2024-09-04T17:11:59.023158341Z" level=info msg="StartContainer for \"54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81\"" Sep 4 17:11:59.075515 systemd[1]: run-containerd-runc-k8s.io-54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81-runc.GhcHIL.mount: Deactivated successfully. Sep 4 17:11:59.089922 systemd[1]: Started cri-containerd-54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81.scope - libcontainer container 54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81. 
Sep 4 17:11:59.138129 containerd[2019]: time="2024-09-04T17:11:59.138007029Z" level=info msg="StartContainer for \"54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81\" returns successfully" Sep 4 17:11:59.490465 kubelet[3211]: I0904 17:11:59.490256 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-74k5l" podStartSLOduration=3.490121459 podCreationTimestamp="2024-09-04 17:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:57.483479229 +0000 UTC m=+14.403907716" watchObservedRunningTime="2024-09-04 17:11:59.490121459 +0000 UTC m=+16.410549970" Sep 4 17:11:59.491786 kubelet[3211]: I0904 17:11:59.491281 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-bwjjk" podStartSLOduration=1.472012437 podCreationTimestamp="2024-09-04 17:11:56 +0000 UTC" firstStartedPulling="2024-09-04 17:11:56.968491595 +0000 UTC m=+13.888920070" lastFinishedPulling="2024-09-04 17:11:58.987676429 +0000 UTC m=+15.908104916" observedRunningTime="2024-09-04 17:11:59.489865799 +0000 UTC m=+16.410294286" watchObservedRunningTime="2024-09-04 17:11:59.491197283 +0000 UTC m=+16.411625770" Sep 4 17:12:04.427159 kubelet[3211]: I0904 17:12:04.427087 3211 topology_manager.go:215] "Topology Admit Handler" podUID="9a6eb710-23cf-499d-b363-850dbfdefba7" podNamespace="calico-system" podName="calico-typha-d849bdccd-57vj8" Sep 4 17:12:04.456542 systemd[1]: Created slice kubepods-besteffort-pod9a6eb710_23cf_499d_b363_850dbfdefba7.slice - libcontainer container kubepods-besteffort-pod9a6eb710_23cf_499d_b363_850dbfdefba7.slice. 
Sep 4 17:12:04.530605 kubelet[3211]: I0904 17:12:04.530404 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb65j\" (UniqueName: \"kubernetes.io/projected/9a6eb710-23cf-499d-b363-850dbfdefba7-kube-api-access-hb65j\") pod \"calico-typha-d849bdccd-57vj8\" (UID: \"9a6eb710-23cf-499d-b363-850dbfdefba7\") " pod="calico-system/calico-typha-d849bdccd-57vj8" Sep 4 17:12:04.530605 kubelet[3211]: I0904 17:12:04.530478 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a6eb710-23cf-499d-b363-850dbfdefba7-typha-certs\") pod \"calico-typha-d849bdccd-57vj8\" (UID: \"9a6eb710-23cf-499d-b363-850dbfdefba7\") " pod="calico-system/calico-typha-d849bdccd-57vj8" Sep 4 17:12:04.530873 kubelet[3211]: I0904 17:12:04.530644 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6eb710-23cf-499d-b363-850dbfdefba7-tigera-ca-bundle\") pod \"calico-typha-d849bdccd-57vj8\" (UID: \"9a6eb710-23cf-499d-b363-850dbfdefba7\") " pod="calico-system/calico-typha-d849bdccd-57vj8" Sep 4 17:12:04.602349 kubelet[3211]: I0904 17:12:04.600180 3211 topology_manager.go:215] "Topology Admit Handler" podUID="e4496d18-fb8b-489e-ba6d-158f761ee10b" podNamespace="calico-system" podName="calico-node-v9jnd" Sep 4 17:12:04.619820 systemd[1]: Created slice kubepods-besteffort-pode4496d18_fb8b_489e_ba6d_158f761ee10b.slice - libcontainer container kubepods-besteffort-pode4496d18_fb8b_489e_ba6d_158f761ee10b.slice. 
Sep 4 17:12:04.631971 kubelet[3211]: I0904 17:12:04.631913 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-cni-bin-dir\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632137 kubelet[3211]: I0904 17:12:04.631996 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-flexvol-driver-host\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632137 kubelet[3211]: I0904 17:12:04.632047 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xn7m\" (UniqueName: \"kubernetes.io/projected/e4496d18-fb8b-489e-ba6d-158f761ee10b-kube-api-access-4xn7m\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632137 kubelet[3211]: I0904 17:12:04.632125 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-var-lib-calico\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632295 kubelet[3211]: I0904 17:12:04.632170 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-lib-modules\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632295 kubelet[3211]: I0904 
17:12:04.632214 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-policysync\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632295 kubelet[3211]: I0904 17:12:04.632263 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e4496d18-fb8b-489e-ba6d-158f761ee10b-node-certs\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632460 kubelet[3211]: I0904 17:12:04.632306 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-var-run-calico\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632460 kubelet[3211]: I0904 17:12:04.632353 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-cni-log-dir\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632460 kubelet[3211]: I0904 17:12:04.632419 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-xtables-lock\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632658 kubelet[3211]: I0904 17:12:04.632470 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4496d18-fb8b-489e-ba6d-158f761ee10b-tigera-ca-bundle\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.632658 kubelet[3211]: I0904 17:12:04.632514 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e4496d18-fb8b-489e-ba6d-158f761ee10b-cni-net-dir\") pod \"calico-node-v9jnd\" (UID: \"e4496d18-fb8b-489e-ba6d-158f761ee10b\") " pod="calico-system/calico-node-v9jnd" Sep 4 17:12:04.720077 kubelet[3211]: I0904 17:12:04.718449 3211 topology_manager.go:215] "Topology Admit Handler" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7" podNamespace="calico-system" podName="csi-node-driver-sdjd2" Sep 4 17:12:04.720077 kubelet[3211]: E0904 17:12:04.718923 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7" Sep 4 17:12:04.736985 kubelet[3211]: E0904 17:12:04.736947 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.737202 kubelet[3211]: W0904 17:12:04.737171 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.737332 kubelet[3211]: E0904 17:12:04.737311 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.739200 kubelet[3211]: E0904 17:12:04.739150 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.739483 kubelet[3211]: W0904 17:12:04.739453 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.739700 kubelet[3211]: E0904 17:12:04.739661 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.743597 kubelet[3211]: E0904 17:12:04.743447 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.743597 kubelet[3211]: W0904 17:12:04.743477 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.743597 kubelet[3211]: E0904 17:12:04.743526 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.745610 kubelet[3211]: E0904 17:12:04.744925 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.745610 kubelet[3211]: W0904 17:12:04.745098 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.745610 kubelet[3211]: E0904 17:12:04.745142 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.747506 kubelet[3211]: E0904 17:12:04.747455 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.747711 kubelet[3211]: W0904 17:12:04.747618 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.749631 kubelet[3211]: E0904 17:12:04.747664 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.750927 kubelet[3211]: E0904 17:12:04.750892 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.751108 kubelet[3211]: W0904 17:12:04.751082 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.751229 kubelet[3211]: E0904 17:12:04.751209 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.774521 containerd[2019]: time="2024-09-04T17:12:04.774460457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d849bdccd-57vj8,Uid:9a6eb710-23cf-499d-b363-850dbfdefba7,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:04.813556 kubelet[3211]: E0904 17:12:04.807899 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.813556 kubelet[3211]: W0904 17:12:04.807931 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.813556 kubelet[3211]: E0904 17:12:04.807970 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.819962 kubelet[3211]: E0904 17:12:04.819911 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.819962 kubelet[3211]: W0904 17:12:04.819950 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.820209 kubelet[3211]: E0904 17:12:04.819991 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.824045 kubelet[3211]: E0904 17:12:04.823993 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.824045 kubelet[3211]: W0904 17:12:04.824032 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.824369 kubelet[3211]: E0904 17:12:04.824073 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.824777 kubelet[3211]: E0904 17:12:04.824701 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.824777 kubelet[3211]: W0904 17:12:04.824725 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.824777 kubelet[3211]: E0904 17:12:04.824757 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.826044 kubelet[3211]: E0904 17:12:04.825860 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.826044 kubelet[3211]: W0904 17:12:04.825899 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.826044 kubelet[3211]: E0904 17:12:04.825939 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.828631 kubelet[3211]: E0904 17:12:04.827737 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.828631 kubelet[3211]: W0904 17:12:04.827778 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.828631 kubelet[3211]: E0904 17:12:04.827818 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.830318 kubelet[3211]: E0904 17:12:04.829816 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.830318 kubelet[3211]: W0904 17:12:04.829856 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.830318 kubelet[3211]: E0904 17:12:04.829895 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.830318 kubelet[3211]: E0904 17:12:04.830314 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.830318 kubelet[3211]: W0904 17:12:04.830335 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.831718 kubelet[3211]: E0904 17:12:04.830365 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.831718 kubelet[3211]: E0904 17:12:04.831436 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.831718 kubelet[3211]: W0904 17:12:04.831465 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.831718 kubelet[3211]: E0904 17:12:04.831502 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.835074 kubelet[3211]: E0904 17:12:04.833320 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.835074 kubelet[3211]: W0904 17:12:04.833358 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.835074 kubelet[3211]: E0904 17:12:04.833396 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.835074 kubelet[3211]: E0904 17:12:04.833908 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.835074 kubelet[3211]: W0904 17:12:04.833931 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.835074 kubelet[3211]: E0904 17:12:04.833960 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.836898 kubelet[3211]: E0904 17:12:04.835961 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.836898 kubelet[3211]: W0904 17:12:04.835992 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.836898 kubelet[3211]: E0904 17:12:04.836041 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.836898 kubelet[3211]: E0904 17:12:04.836369 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.836898 kubelet[3211]: W0904 17:12:04.836387 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.836898 kubelet[3211]: E0904 17:12:04.836414 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.838307 kubelet[3211]: E0904 17:12:04.837001 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.838307 kubelet[3211]: W0904 17:12:04.837082 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.838307 kubelet[3211]: E0904 17:12:04.837118 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.839166 kubelet[3211]: E0904 17:12:04.838711 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.839166 kubelet[3211]: W0904 17:12:04.838748 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.839166 kubelet[3211]: E0904 17:12:04.838786 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.840902 kubelet[3211]: E0904 17:12:04.840158 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.840902 kubelet[3211]: W0904 17:12:04.840192 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.840902 kubelet[3211]: E0904 17:12:04.840224 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.843462 kubelet[3211]: E0904 17:12:04.842540 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.843462 kubelet[3211]: W0904 17:12:04.842595 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.843462 kubelet[3211]: E0904 17:12:04.842635 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.845308 kubelet[3211]: E0904 17:12:04.845074 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.845308 kubelet[3211]: W0904 17:12:04.845110 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.845308 kubelet[3211]: E0904 17:12:04.845146 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.846804 kubelet[3211]: E0904 17:12:04.846710 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.846804 kubelet[3211]: W0904 17:12:04.846747 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.846804 kubelet[3211]: E0904 17:12:04.846785 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.848102 kubelet[3211]: E0904 17:12:04.847878 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.848102 kubelet[3211]: W0904 17:12:04.847912 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.848102 kubelet[3211]: E0904 17:12:04.847951 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.849779 kubelet[3211]: E0904 17:12:04.849092 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.849779 kubelet[3211]: W0904 17:12:04.849128 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.849779 kubelet[3211]: E0904 17:12:04.849169 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.853020 kubelet[3211]: E0904 17:12:04.852966 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.853020 kubelet[3211]: W0904 17:12:04.853006 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.853020 kubelet[3211]: E0904 17:12:04.853046 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.854891 kubelet[3211]: I0904 17:12:04.853109 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c1c414c-401b-4001-b4a7-8eb90c8e06f7-varrun\") pod \"csi-node-driver-sdjd2\" (UID: \"4c1c414c-401b-4001-b4a7-8eb90c8e06f7\") " pod="calico-system/csi-node-driver-sdjd2" Sep 4 17:12:04.855048 kubelet[3211]: E0904 17:12:04.854988 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.855048 kubelet[3211]: W0904 17:12:04.855018 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.855177 kubelet[3211]: E0904 17:12:04.855069 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.855177 kubelet[3211]: I0904 17:12:04.855136 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c1c414c-401b-4001-b4a7-8eb90c8e06f7-registration-dir\") pod \"csi-node-driver-sdjd2\" (UID: \"4c1c414c-401b-4001-b4a7-8eb90c8e06f7\") " pod="calico-system/csi-node-driver-sdjd2" Sep 4 17:12:04.858349 kubelet[3211]: E0904 17:12:04.857657 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.858349 kubelet[3211]: W0904 17:12:04.857702 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.858349 kubelet[3211]: E0904 17:12:04.858100 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.858349 kubelet[3211]: I0904 17:12:04.858162 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv4nm\" (UniqueName: \"kubernetes.io/projected/4c1c414c-401b-4001-b4a7-8eb90c8e06f7-kube-api-access-vv4nm\") pod \"csi-node-driver-sdjd2\" (UID: \"4c1c414c-401b-4001-b4a7-8eb90c8e06f7\") " pod="calico-system/csi-node-driver-sdjd2" Sep 4 17:12:04.859821 kubelet[3211]: E0904 17:12:04.859194 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.859821 kubelet[3211]: W0904 17:12:04.859236 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.860033 kubelet[3211]: E0904 17:12:04.860007 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.860087 kubelet[3211]: W0904 17:12:04.860030 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.861824 kubelet[3211]: E0904 17:12:04.861672 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.861824 kubelet[3211]: E0904 17:12:04.861763 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.862985 kubelet[3211]: E0904 17:12:04.862734 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.862985 kubelet[3211]: W0904 17:12:04.862772 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.862985 kubelet[3211]: E0904 17:12:04.862852 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.862985 kubelet[3211]: I0904 17:12:04.862935 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c1c414c-401b-4001-b4a7-8eb90c8e06f7-kubelet-dir\") pod \"csi-node-driver-sdjd2\" (UID: \"4c1c414c-401b-4001-b4a7-8eb90c8e06f7\") " pod="calico-system/csi-node-driver-sdjd2" Sep 4 17:12:04.864727 kubelet[3211]: E0904 17:12:04.864193 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.864727 kubelet[3211]: W0904 17:12:04.864233 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.865406 kubelet[3211]: E0904 17:12:04.865138 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.866880 kubelet[3211]: E0904 17:12:04.865765 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.866880 kubelet[3211]: W0904 17:12:04.865794 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.866880 kubelet[3211]: E0904 17:12:04.865832 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.870774 kubelet[3211]: E0904 17:12:04.870708 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.870774 kubelet[3211]: W0904 17:12:04.870751 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.870774 kubelet[3211]: E0904 17:12:04.870803 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.870774 kubelet[3211]: I0904 17:12:04.870858 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c1c414c-401b-4001-b4a7-8eb90c8e06f7-socket-dir\") pod \"csi-node-driver-sdjd2\" (UID: \"4c1c414c-401b-4001-b4a7-8eb90c8e06f7\") " pod="calico-system/csi-node-driver-sdjd2" Sep 4 17:12:04.873426 kubelet[3211]: E0904 17:12:04.872802 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.873426 kubelet[3211]: W0904 17:12:04.872839 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.873426 kubelet[3211]: E0904 17:12:04.873175 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.875606 kubelet[3211]: E0904 17:12:04.875056 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.875606 kubelet[3211]: W0904 17:12:04.875121 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.875606 kubelet[3211]: E0904 17:12:04.875161 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.877563 kubelet[3211]: E0904 17:12:04.876988 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.877563 kubelet[3211]: W0904 17:12:04.877027 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.877563 kubelet[3211]: E0904 17:12:04.877068 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.880041 kubelet[3211]: E0904 17:12:04.879045 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.880041 kubelet[3211]: W0904 17:12:04.879081 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.880041 kubelet[3211]: E0904 17:12:04.879123 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:04.881320 kubelet[3211]: E0904 17:12:04.881281 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.881673 kubelet[3211]: W0904 17:12:04.881484 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.881673 kubelet[3211]: E0904 17:12:04.881529 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.883368 kubelet[3211]: E0904 17:12:04.883224 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:04.883368 kubelet[3211]: W0904 17:12:04.883259 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:04.883368 kubelet[3211]: E0904 17:12:04.883315 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:04.888976 containerd[2019]: time="2024-09-04T17:12:04.887666250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:04.893800 containerd[2019]: time="2024-09-04T17:12:04.888981366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:04.893800 containerd[2019]: time="2024-09-04T17:12:04.889072458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:04.893800 containerd[2019]: time="2024-09-04T17:12:04.889100490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:04.932152 containerd[2019]: time="2024-09-04T17:12:04.931838406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9jnd,Uid:e4496d18-fb8b-489e-ba6d-158f761ee10b,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:04.962402 systemd[1]: Started cri-containerd-740aed415ce2629a47a96a612949f979947eef1535d5d282efd752fc2e4eb05c.scope - libcontainer container 740aed415ce2629a47a96a612949f979947eef1535d5d282efd752fc2e4eb05c. Sep 4 17:12:05.055968 kubelet[3211]: E0904 17:12:04.980266 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.055968 kubelet[3211]: W0904 17:12:04.980295 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.055968 kubelet[3211]: E0904 17:12:04.980358 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.055968 kubelet[3211]: E0904 17:12:04.981012 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.055968 kubelet[3211]: W0904 17:12:04.981036 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.055968 kubelet[3211]: E0904 17:12:04.981677 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.055968 kubelet[3211]: E0904 17:12:04.982946 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.055968 kubelet[3211]: W0904 17:12:04.982974 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.055968 kubelet[3211]: E0904 17:12:04.983027 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.055968 kubelet[3211]: E0904 17:12:04.983415 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.056479 kubelet[3211]: W0904 17:12:04.983438 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.056479 kubelet[3211]: E0904 17:12:04.983544 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.056479 kubelet[3211]: E0904 17:12:04.983913 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.056479 kubelet[3211]: W0904 17:12:04.983933 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.056479 kubelet[3211]: E0904 17:12:04.984035 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.056479 kubelet[3211]: E0904 17:12:04.984276 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.056479 kubelet[3211]: W0904 17:12:04.984293 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.056479 kubelet[3211]: E0904 17:12:04.984404 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.056479 kubelet[3211]: E0904 17:12:04.984728 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.056479 kubelet[3211]: W0904 17:12:04.984746 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.057036 kubelet[3211]: E0904 17:12:04.984793 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.057036 kubelet[3211]: E0904 17:12:04.985259 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.057036 kubelet[3211]: W0904 17:12:04.985282 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.057036 kubelet[3211]: E0904 17:12:04.985328 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.057036 kubelet[3211]: E0904 17:12:04.986699 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.057036 kubelet[3211]: W0904 17:12:04.986729 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.057036 kubelet[3211]: E0904 17:12:04.986806 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.057036 kubelet[3211]: E0904 17:12:04.987176 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.057036 kubelet[3211]: W0904 17:12:04.987195 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.057036 kubelet[3211]: E0904 17:12:04.987404 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.062954 kubelet[3211]: E0904 17:12:04.987564 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.062954 kubelet[3211]: W0904 17:12:04.987598 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.062954 kubelet[3211]: E0904 17:12:04.987822 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.062954 kubelet[3211]: E0904 17:12:04.988102 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.062954 kubelet[3211]: W0904 17:12:04.988126 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.062954 kubelet[3211]: E0904 17:12:04.988211 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.062954 kubelet[3211]: E0904 17:12:04.989432 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.062954 kubelet[3211]: W0904 17:12:04.989458 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.062954 kubelet[3211]: E0904 17:12:04.989533 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.062954 kubelet[3211]: E0904 17:12:04.989963 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.070870 kubelet[3211]: W0904 17:12:04.989983 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.070870 kubelet[3211]: E0904 17:12:04.990049 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.070870 kubelet[3211]: E0904 17:12:04.990395 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.070870 kubelet[3211]: W0904 17:12:04.990418 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.070870 kubelet[3211]: E0904 17:12:04.990487 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.070870 kubelet[3211]: E0904 17:12:04.994121 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.070870 kubelet[3211]: W0904 17:12:04.994150 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.070870 kubelet[3211]: E0904 17:12:04.994226 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.070870 kubelet[3211]: E0904 17:12:04.995905 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.070870 kubelet[3211]: W0904 17:12:04.995960 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.071440 kubelet[3211]: E0904 17:12:04.996429 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.071440 kubelet[3211]: W0904 17:12:04.996482 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.071440 kubelet[3211]: E0904 17:12:04.996897 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.071440 kubelet[3211]: W0904 17:12:04.996915 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Sep 4 17:12:05.071440 kubelet[3211]: E0904 17:12:04.996948 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.071440 kubelet[3211]: E0904 17:12:04.997351 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.071440 kubelet[3211]: W0904 17:12:04.997371 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.071440 kubelet[3211]: E0904 17:12:04.997398 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.071440 kubelet[3211]: E0904 17:12:04.997449 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.071440 kubelet[3211]: E0904 17:12:04.997822 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.075474 kubelet[3211]: E0904 17:12:04.998696 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.075474 kubelet[3211]: W0904 17:12:04.998729 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.075474 kubelet[3211]: E0904 17:12:04.998807 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.075474 kubelet[3211]: E0904 17:12:05.000817 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.075474 kubelet[3211]: W0904 17:12:05.000848 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.075474 kubelet[3211]: E0904 17:12:05.000963 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.075474 kubelet[3211]: E0904 17:12:05.001924 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.075474 kubelet[3211]: W0904 17:12:05.001947 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.075474 kubelet[3211]: E0904 17:12:05.001980 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.075474 kubelet[3211]: E0904 17:12:05.002376 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.078628 kubelet[3211]: W0904 17:12:05.002395 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.078628 kubelet[3211]: E0904 17:12:05.002421 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:05.078628 kubelet[3211]: E0904 17:12:05.063617 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.078628 kubelet[3211]: W0904 17:12:05.064989 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.078628 kubelet[3211]: E0904 17:12:05.065290 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.084970 kubelet[3211]: E0904 17:12:05.084776 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:05.084970 kubelet[3211]: W0904 17:12:05.084838 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:05.084970 kubelet[3211]: E0904 17:12:05.084877 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:05.133635 containerd[2019]: time="2024-09-04T17:12:05.132997779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:05.133635 containerd[2019]: time="2024-09-04T17:12:05.133133295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:05.133635 containerd[2019]: time="2024-09-04T17:12:05.133199727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:05.133635 containerd[2019]: time="2024-09-04T17:12:05.133235379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:05.186976 systemd[1]: Started cri-containerd-34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70.scope - libcontainer container 34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70. Sep 4 17:12:05.205847 containerd[2019]: time="2024-09-04T17:12:05.205148920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d849bdccd-57vj8,Uid:9a6eb710-23cf-499d-b363-850dbfdefba7,Namespace:calico-system,Attempt:0,} returns sandbox id \"740aed415ce2629a47a96a612949f979947eef1535d5d282efd752fc2e4eb05c\"" Sep 4 17:12:05.211454 containerd[2019]: time="2024-09-04T17:12:05.209297752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:12:05.270109 containerd[2019]: time="2024-09-04T17:12:05.270029260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9jnd,Uid:e4496d18-fb8b-489e-ba6d-158f761ee10b,Namespace:calico-system,Attempt:0,} returns sandbox id \"34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70\"" Sep 4 17:12:06.328314 kubelet[3211]: E0904 17:12:06.327617 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7" Sep 4 17:12:08.093209 containerd[2019]: time="2024-09-04T17:12:08.093141546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:08.094853 containerd[2019]: time="2024-09-04T17:12:08.094787298Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:12:08.098423 containerd[2019]: time="2024-09-04T17:12:08.098318046Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:08.107115 containerd[2019]: time="2024-09-04T17:12:08.106864638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:08.110172 containerd[2019]: time="2024-09-04T17:12:08.109930590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.899136342s" Sep 4 17:12:08.110172 containerd[2019]: time="2024-09-04T17:12:08.109999746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:12:08.113328 containerd[2019]: time="2024-09-04T17:12:08.113154066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:12:08.169615 containerd[2019]: time="2024-09-04T17:12:08.169456914Z" level=info msg="CreateContainer within sandbox \"740aed415ce2629a47a96a612949f979947eef1535d5d282efd752fc2e4eb05c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:12:08.200238 containerd[2019]: time="2024-09-04T17:12:08.200173062Z" level=info msg="CreateContainer within sandbox \"740aed415ce2629a47a96a612949f979947eef1535d5d282efd752fc2e4eb05c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"7f9def028b0c69fe39ffb8a70b5bc60a8b84ab85750da8f2e623223174171b37\"" Sep 4 17:12:08.203661 containerd[2019]: time="2024-09-04T17:12:08.202244899Z" level=info msg="StartContainer for \"7f9def028b0c69fe39ffb8a70b5bc60a8b84ab85750da8f2e623223174171b37\"" Sep 4 17:12:08.273904 systemd[1]: Started cri-containerd-7f9def028b0c69fe39ffb8a70b5bc60a8b84ab85750da8f2e623223174171b37.scope - libcontainer container 7f9def028b0c69fe39ffb8a70b5bc60a8b84ab85750da8f2e623223174171b37. Sep 4 17:12:08.327838 kubelet[3211]: E0904 17:12:08.327781 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7" Sep 4 17:12:08.376683 containerd[2019]: time="2024-09-04T17:12:08.376415491Z" level=info msg="StartContainer for \"7f9def028b0c69fe39ffb8a70b5bc60a8b84ab85750da8f2e623223174171b37\" returns successfully" Sep 4 17:12:08.583666 kubelet[3211]: E0904 17:12:08.583351 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.583666 kubelet[3211]: W0904 17:12:08.583383 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.583666 kubelet[3211]: E0904 17:12:08.583439 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.585002 kubelet[3211]: E0904 17:12:08.584485 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.585002 kubelet[3211]: W0904 17:12:08.584519 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.585002 kubelet[3211]: E0904 17:12:08.584683 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.585950 kubelet[3211]: E0904 17:12:08.585673 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.585950 kubelet[3211]: W0904 17:12:08.585703 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.585950 kubelet[3211]: E0904 17:12:08.585741 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.586702 kubelet[3211]: E0904 17:12:08.586417 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.586702 kubelet[3211]: W0904 17:12:08.586437 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.586702 kubelet[3211]: E0904 17:12:08.586467 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.587558 kubelet[3211]: E0904 17:12:08.587530 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.587997 kubelet[3211]: W0904 17:12:08.587753 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.587997 kubelet[3211]: E0904 17:12:08.587797 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.588849 kubelet[3211]: E0904 17:12:08.588708 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.588849 kubelet[3211]: W0904 17:12:08.588736 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.588849 kubelet[3211]: E0904 17:12:08.588768 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.590256 kubelet[3211]: E0904 17:12:08.589691 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.590256 kubelet[3211]: W0904 17:12:08.589720 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.590256 kubelet[3211]: E0904 17:12:08.589753 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.591393 kubelet[3211]: E0904 17:12:08.590999 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.591393 kubelet[3211]: W0904 17:12:08.591033 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.591393 kubelet[3211]: E0904 17:12:08.591067 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.592548 kubelet[3211]: E0904 17:12:08.592391 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.592548 kubelet[3211]: W0904 17:12:08.592422 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.592548 kubelet[3211]: E0904 17:12:08.592455 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.593404 kubelet[3211]: E0904 17:12:08.593184 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.593404 kubelet[3211]: W0904 17:12:08.593211 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.593404 kubelet[3211]: E0904 17:12:08.593242 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.594261 kubelet[3211]: E0904 17:12:08.594101 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.594261 kubelet[3211]: W0904 17:12:08.594130 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.594261 kubelet[3211]: E0904 17:12:08.594165 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.595356 kubelet[3211]: E0904 17:12:08.595014 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.595356 kubelet[3211]: W0904 17:12:08.595042 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.595356 kubelet[3211]: E0904 17:12:08.595074 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.596375 kubelet[3211]: E0904 17:12:08.596162 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.596375 kubelet[3211]: W0904 17:12:08.596191 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.596375 kubelet[3211]: E0904 17:12:08.596225 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.597158 kubelet[3211]: E0904 17:12:08.596943 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.597158 kubelet[3211]: W0904 17:12:08.596970 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.597158 kubelet[3211]: E0904 17:12:08.597002 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.597664 kubelet[3211]: E0904 17:12:08.597495 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.597664 kubelet[3211]: W0904 17:12:08.597515 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.597664 kubelet[3211]: E0904 17:12:08.597540 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.617353 kubelet[3211]: E0904 17:12:08.617165 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.617353 kubelet[3211]: W0904 17:12:08.617199 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.617353 kubelet[3211]: E0904 17:12:08.617238 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.618290 kubelet[3211]: E0904 17:12:08.618145 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.618290 kubelet[3211]: W0904 17:12:08.618178 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.618290 kubelet[3211]: E0904 17:12:08.618237 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.621433 kubelet[3211]: E0904 17:12:08.621152 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.621433 kubelet[3211]: W0904 17:12:08.621182 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.621433 kubelet[3211]: E0904 17:12:08.621240 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.623707 kubelet[3211]: E0904 17:12:08.622048 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.623707 kubelet[3211]: W0904 17:12:08.622076 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.624761 kubelet[3211]: E0904 17:12:08.624204 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.625221 kubelet[3211]: E0904 17:12:08.625038 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.625221 kubelet[3211]: W0904 17:12:08.625064 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.627028 kubelet[3211]: E0904 17:12:08.626802 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.628947 kubelet[3211]: E0904 17:12:08.628674 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.628947 kubelet[3211]: W0904 17:12:08.628710 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.628947 kubelet[3211]: E0904 17:12:08.628810 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.631345 kubelet[3211]: E0904 17:12:08.631295 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.631345 kubelet[3211]: W0904 17:12:08.631332 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.632211 kubelet[3211]: E0904 17:12:08.631486 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.632379 kubelet[3211]: E0904 17:12:08.632338 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.632379 kubelet[3211]: W0904 17:12:08.632372 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.632669 kubelet[3211]: E0904 17:12:08.632507 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.634057 kubelet[3211]: E0904 17:12:08.634007 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.634057 kubelet[3211]: W0904 17:12:08.634045 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.634358 kubelet[3211]: E0904 17:12:08.634199 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.635817 kubelet[3211]: E0904 17:12:08.635765 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.635817 kubelet[3211]: W0904 17:12:08.635803 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.636881 kubelet[3211]: E0904 17:12:08.635973 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.636881 kubelet[3211]: E0904 17:12:08.636235 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.636881 kubelet[3211]: W0904 17:12:08.636252 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.636881 kubelet[3211]: E0904 17:12:08.636654 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.639178 kubelet[3211]: E0904 17:12:08.639128 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.639178 kubelet[3211]: W0904 17:12:08.639165 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.639545 kubelet[3211]: E0904 17:12:08.639218 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.640978 kubelet[3211]: E0904 17:12:08.640928 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.640978 kubelet[3211]: W0904 17:12:08.640966 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.641638 kubelet[3211]: E0904 17:12:08.641119 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.642031 kubelet[3211]: E0904 17:12:08.641983 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.642031 kubelet[3211]: W0904 17:12:08.642017 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.642309 kubelet[3211]: E0904 17:12:08.642090 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:08.643812 kubelet[3211]: E0904 17:12:08.643752 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.643812 kubelet[3211]: W0904 17:12:08.643791 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.643812 kubelet[3211]: E0904 17:12:08.643869 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:08.644755 kubelet[3211]: E0904 17:12:08.644713 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:08.644755 kubelet[3211]: W0904 17:12:08.644747 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:08.645021 kubelet[3211]: E0904 17:12:08.644795 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 17:12:08.646083 kubelet[3211]: E0904 17:12:08.646028 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:08.646083 kubelet[3211]: W0904 17:12:08.646065 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:08.646248 kubelet[3211]: E0904 17:12:08.646109 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:08.647218 kubelet[3211]: E0904 17:12:08.647170 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:08.647218 kubelet[3211]: W0904 17:12:08.647206 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:08.647419 kubelet[3211]: E0904 17:12:08.647244 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:08.997469 kubelet[3211]: I0904 17:12:08.996884 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-d849bdccd-57vj8" podStartSLOduration=2.09381148 podCreationTimestamp="2024-09-04 17:12:04 +0000 UTC" firstStartedPulling="2024-09-04 17:12:05.208491868 +0000 UTC m=+22.128920343" lastFinishedPulling="2024-09-04 17:12:08.111504198 +0000 UTC m=+25.031932673" observedRunningTime="2024-09-04 17:12:08.548364464 +0000 UTC m=+25.468792963" watchObservedRunningTime="2024-09-04 17:12:08.99682381 +0000 UTC m=+25.917252393"
Sep 4 17:12:09.595605 containerd[2019]: time="2024-09-04T17:12:09.595494813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:09.597787 containerd[2019]: time="2024-09-04T17:12:09.597713589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957"
Sep 4 17:12:09.600691 containerd[2019]: time="2024-09-04T17:12:09.600257109Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:09.605137 containerd[2019]: time="2024-09-04T17:12:09.604928313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:09.609026 kubelet[3211]: E0904 17:12:09.608823 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.609026 kubelet[3211]: W0904 17:12:09.608876 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.609026 kubelet[3211]: E0904 17:12:09.608920 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.611206 containerd[2019]: time="2024-09-04T17:12:09.609927621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.496705563s"
Sep 4 17:12:09.611206 containerd[2019]: time="2024-09-04T17:12:09.610328973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\""
Sep 4 17:12:09.613102 kubelet[3211]: E0904 17:12:09.613039 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.613102 kubelet[3211]: W0904 17:12:09.613104 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.613316 kubelet[3211]: E0904 17:12:09.613147 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.616191 containerd[2019]: time="2024-09-04T17:12:09.615844210Z" level=info msg="CreateContainer within sandbox \"34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 17:12:09.617650 kubelet[3211]: E0904 17:12:09.617324 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.617650 kubelet[3211]: W0904 17:12:09.617361 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.617650 kubelet[3211]: E0904 17:12:09.617401 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.621044 kubelet[3211]: E0904 17:12:09.619365 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.621044 kubelet[3211]: W0904 17:12:09.619400 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.621044 kubelet[3211]: E0904 17:12:09.619438 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.621044 kubelet[3211]: E0904 17:12:09.620141 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.621044 kubelet[3211]: W0904 17:12:09.620168 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.621044 kubelet[3211]: E0904 17:12:09.620201 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.621044 kubelet[3211]: E0904 17:12:09.620924 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.621044 kubelet[3211]: W0904 17:12:09.620983 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.621566 kubelet[3211]: E0904 17:12:09.621079 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.624173 kubelet[3211]: E0904 17:12:09.624120 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.624173 kubelet[3211]: W0904 17:12:09.624158 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.624366 kubelet[3211]: E0904 17:12:09.624198 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.625177 kubelet[3211]: E0904 17:12:09.624761 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.625177 kubelet[3211]: W0904 17:12:09.624792 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.625177 kubelet[3211]: E0904 17:12:09.625002 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.626108 kubelet[3211]: E0904 17:12:09.625933 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.626108 kubelet[3211]: W0904 17:12:09.625995 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.626108 kubelet[3211]: E0904 17:12:09.626031 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.626854 kubelet[3211]: E0904 17:12:09.626793 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.626854 kubelet[3211]: W0904 17:12:09.626835 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.627122 kubelet[3211]: E0904 17:12:09.626870 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.627535 kubelet[3211]: E0904 17:12:09.627491 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.627657 kubelet[3211]: W0904 17:12:09.627523 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.627717 kubelet[3211]: E0904 17:12:09.627679 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.628786 kubelet[3211]: E0904 17:12:09.628155 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.628786 kubelet[3211]: W0904 17:12:09.628184 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.628786 kubelet[3211]: E0904 17:12:09.628241 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.628786 kubelet[3211]: E0904 17:12:09.628774 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.629135 kubelet[3211]: W0904 17:12:09.628809 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.629135 kubelet[3211]: E0904 17:12:09.628840 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.629135 kubelet[3211]: E0904 17:12:09.629268 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.629135 kubelet[3211]: W0904 17:12:09.629287 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.629135 kubelet[3211]: E0904 17:12:09.629312 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.630011 kubelet[3211]: E0904 17:12:09.629860 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.630011 kubelet[3211]: W0904 17:12:09.629884 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.630011 kubelet[3211]: E0904 17:12:09.629942 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.633974 kubelet[3211]: E0904 17:12:09.633540 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.633974 kubelet[3211]: W0904 17:12:09.633642 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.633974 kubelet[3211]: E0904 17:12:09.633680 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.634681 kubelet[3211]: E0904 17:12:09.634217 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.634681 kubelet[3211]: W0904 17:12:09.634248 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.634681 kubelet[3211]: E0904 17:12:09.634299 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.634899 kubelet[3211]: E0904 17:12:09.634764 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.634899 kubelet[3211]: W0904 17:12:09.634783 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.634899 kubelet[3211]: E0904 17:12:09.634827 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.637227 kubelet[3211]: E0904 17:12:09.635207 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.637227 kubelet[3211]: W0904 17:12:09.635236 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.637227 kubelet[3211]: E0904 17:12:09.635286 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.637227 kubelet[3211]: E0904 17:12:09.635717 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.637227 kubelet[3211]: W0904 17:12:09.635751 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.637227 kubelet[3211]: E0904 17:12:09.635808 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.637227 kubelet[3211]: E0904 17:12:09.636943 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.637227 kubelet[3211]: W0904 17:12:09.636974 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.637227 kubelet[3211]: E0904 17:12:09.637021 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.644160 kubelet[3211]: E0904 17:12:09.643891 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.644160 kubelet[3211]: W0904 17:12:09.644048 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.644510 kubelet[3211]: E0904 17:12:09.644350 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.645874 kubelet[3211]: E0904 17:12:09.645625 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.645874 kubelet[3211]: W0904 17:12:09.645804 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.646046 kubelet[3211]: E0904 17:12:09.645979 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.648237 kubelet[3211]: E0904 17:12:09.647114 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.648237 kubelet[3211]: W0904 17:12:09.647156 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.650501 kubelet[3211]: E0904 17:12:09.649720 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.650501 kubelet[3211]: W0904 17:12:09.649874 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.650877 kubelet[3211]: E0904 17:12:09.650826 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.650877 kubelet[3211]: W0904 17:12:09.650864 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.651058 kubelet[3211]: E0904 17:12:09.650902 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.652710 kubelet[3211]: E0904 17:12:09.652635 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.652710 kubelet[3211]: W0904 17:12:09.652693 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.652876 kubelet[3211]: E0904 17:12:09.652735 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.654140 kubelet[3211]: E0904 17:12:09.653687 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.654140 kubelet[3211]: W0904 17:12:09.653755 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.654140 kubelet[3211]: E0904 17:12:09.653794 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.656601 kubelet[3211]: E0904 17:12:09.655167 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.656601 kubelet[3211]: W0904 17:12:09.655197 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.656601 kubelet[3211]: E0904 17:12:09.655238 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.658619 kubelet[3211]: E0904 17:12:09.657247 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.658619 kubelet[3211]: W0904 17:12:09.657285 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.658619 kubelet[3211]: E0904 17:12:09.657325 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.662990 kubelet[3211]: E0904 17:12:09.662457 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.663230 kubelet[3211]: E0904 17:12:09.663174 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.663230 kubelet[3211]: W0904 17:12:09.663224 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.663337 kubelet[3211]: E0904 17:12:09.663260 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.672665 kubelet[3211]: E0904 17:12:09.671305 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.672665 kubelet[3211]: W0904 17:12:09.671343 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.672665 kubelet[3211]: E0904 17:12:09.671399 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.672665 kubelet[3211]: E0904 17:12:09.671456 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.679335 kubelet[3211]: E0904 17:12:09.678729 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:09.679335 kubelet[3211]: W0904 17:12:09.678762 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:09.679335 kubelet[3211]: E0904 17:12:09.678799 3211 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:09.679485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010703328.mount: Deactivated successfully.
Sep 4 17:12:09.685618 containerd[2019]: time="2024-09-04T17:12:09.685118722Z" level=info msg="CreateContainer within sandbox \"34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9\""
Sep 4 17:12:09.689623 containerd[2019]: time="2024-09-04T17:12:09.687329326Z" level=info msg="StartContainer for \"f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9\""
Sep 4 17:12:09.782471 systemd[1]: Started cri-containerd-f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9.scope - libcontainer container f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9.
Sep 4 17:12:09.866753 containerd[2019]: time="2024-09-04T17:12:09.865992467Z" level=info msg="StartContainer for \"f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9\" returns successfully"
Sep 4 17:12:09.926853 systemd[1]: cri-containerd-f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9.scope: Deactivated successfully.
Sep 4 17:12:10.003014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9-rootfs.mount: Deactivated successfully.
Sep 4 17:12:10.329153 kubelet[3211]: E0904 17:12:10.327787 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:12.271558 containerd[2019]: time="2024-09-04T17:12:12.271458683Z" level=info msg="shim disconnected" id=f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9 namespace=k8s.io
Sep 4 17:12:12.271558 containerd[2019]: time="2024-09-04T17:12:12.271544099Z" level=warning msg="cleaning up after shim disconnected" id=f61bfcd9f094b600fa44fc285fa9046b1461969f82babe879421c624ef7336f9 namespace=k8s.io
Sep 4 17:12:12.271558 containerd[2019]: time="2024-09-04T17:12:12.271565651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:12:12.294649 containerd[2019]: time="2024-09-04T17:12:12.294316931Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:12:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:12:12.328220 kubelet[3211]: E0904 17:12:12.328151 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:12.550888 containerd[2019]: time="2024-09-04T17:12:12.550295256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Sep 4 17:12:14.327920 kubelet[3211]: E0904 17:12:14.327853 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:16.328365 kubelet[3211]: E0904 17:12:16.328317 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:17.472993 containerd[2019]: time="2024-09-04T17:12:17.472908569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:17.474686 containerd[2019]: time="2024-09-04T17:12:17.474610289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887"
Sep 4 17:12:17.476565 containerd[2019]: time="2024-09-04T17:12:17.476490677Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:17.482083 containerd[2019]: time="2024-09-04T17:12:17.482009813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:17.483221 containerd[2019]: time="2024-09-04T17:12:17.482946437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.932591037s"
Sep 4 17:12:17.483221 containerd[2019]: time="2024-09-04T17:12:17.483001973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\""
Sep 4 17:12:17.489179 containerd[2019]: time="2024-09-04T17:12:17.488709173Z" level=info msg="CreateContainer within sandbox \"34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 17:12:17.513894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046133820.mount: Deactivated successfully.
Sep 4 17:12:17.520632 containerd[2019]: time="2024-09-04T17:12:17.519410021Z" level=info msg="CreateContainer within sandbox \"34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30\""
Sep 4 17:12:17.525476 containerd[2019]: time="2024-09-04T17:12:17.525350777Z" level=info msg="StartContainer for \"236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30\""
Sep 4 17:12:17.600911 systemd[1]: Started cri-containerd-236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30.scope - libcontainer container 236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30.
Sep 4 17:12:17.658879 containerd[2019]: time="2024-09-04T17:12:17.658748981Z" level=info msg="StartContainer for \"236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30\" returns successfully"
Sep 4 17:12:18.327933 kubelet[3211]: E0904 17:12:18.327865 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:20.327946 kubelet[3211]: E0904 17:12:20.327892 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:20.906134 containerd[2019]: time="2024-09-04T17:12:20.906065482Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:12:20.910500 systemd[1]: cri-containerd-236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30.scope: Deactivated successfully.
Sep 4 17:12:20.958445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30-rootfs.mount: Deactivated successfully.
Sep 4 17:12:20.988962 kubelet[3211]: I0904 17:12:20.988836 3211 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep 4 17:12:21.028735 kubelet[3211]: I0904 17:12:21.027410 3211 topology_manager.go:215] "Topology Admit Handler" podUID="ae4abada-c0d1-4d32-85d1-deba613db57a" podNamespace="kube-system" podName="coredns-5dd5756b68-lt6bq"
Sep 4 17:12:21.040696 kubelet[3211]: I0904 17:12:21.039128 3211 topology_manager.go:215] "Topology Admit Handler" podUID="863ff27d-a93d-4e40-81b6-a08a0a0a3b4d" podNamespace="kube-system" podName="coredns-5dd5756b68-5wbnj"
Sep 4 17:12:21.042819 kubelet[3211]: I0904 17:12:21.042773 3211 topology_manager.go:215] "Topology Admit Handler" podUID="0a5993e3-01c7-40e2-a922-ad294021ce88" podNamespace="calico-system" podName="calico-kube-controllers-69487b4b9-mwnnc"
Sep 4 17:12:21.052425 systemd[1]: Created slice kubepods-burstable-podae4abada_c0d1_4d32_85d1_deba613db57a.slice - libcontainer container kubepods-burstable-podae4abada_c0d1_4d32_85d1_deba613db57a.slice.
Sep 4 17:12:21.073971 systemd[1]: Created slice kubepods-besteffort-pod0a5993e3_01c7_40e2_a922_ad294021ce88.slice - libcontainer container kubepods-besteffort-pod0a5993e3_01c7_40e2_a922_ad294021ce88.slice.
Sep 4 17:12:21.091163 systemd[1]: Created slice kubepods-burstable-pod863ff27d_a93d_4e40_81b6_a08a0a0a3b4d.slice - libcontainer container kubepods-burstable-pod863ff27d_a93d_4e40_81b6_a08a0a0a3b4d.slice.
Sep 4 17:12:21.221250 kubelet[3211]: I0904 17:12:21.221119 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5628\" (UniqueName: \"kubernetes.io/projected/ae4abada-c0d1-4d32-85d1-deba613db57a-kube-api-access-k5628\") pod \"coredns-5dd5756b68-lt6bq\" (UID: \"ae4abada-c0d1-4d32-85d1-deba613db57a\") " pod="kube-system/coredns-5dd5756b68-lt6bq"
Sep 4 17:12:21.222125 kubelet[3211]: I0904 17:12:21.221851 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wsd\" (UniqueName: \"kubernetes.io/projected/0a5993e3-01c7-40e2-a922-ad294021ce88-kube-api-access-l6wsd\") pod \"calico-kube-controllers-69487b4b9-mwnnc\" (UID: \"0a5993e3-01c7-40e2-a922-ad294021ce88\") " pod="calico-system/calico-kube-controllers-69487b4b9-mwnnc"
Sep 4 17:12:21.222513 kubelet[3211]: I0904 17:12:21.222482 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae4abada-c0d1-4d32-85d1-deba613db57a-config-volume\") pod \"coredns-5dd5756b68-lt6bq\" (UID: \"ae4abada-c0d1-4d32-85d1-deba613db57a\") " pod="kube-system/coredns-5dd5756b68-lt6bq"
Sep 4 17:12:21.222777 kubelet[3211]: I0904 17:12:21.222548 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/863ff27d-a93d-4e40-81b6-a08a0a0a3b4d-config-volume\") pod \"coredns-5dd5756b68-5wbnj\" (UID: \"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d\") " pod="kube-system/coredns-5dd5756b68-5wbnj"
Sep 4 17:12:21.222777 kubelet[3211]: I0904 17:12:21.222619 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl7wn\" (UniqueName: \"kubernetes.io/projected/863ff27d-a93d-4e40-81b6-a08a0a0a3b4d-kube-api-access-dl7wn\") pod \"coredns-5dd5756b68-5wbnj\" (UID: \"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d\") " pod="kube-system/coredns-5dd5756b68-5wbnj"
Sep 4 17:12:21.222777 kubelet[3211]: I0904 17:12:21.222721 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5993e3-01c7-40e2-a922-ad294021ce88-tigera-ca-bundle\") pod \"calico-kube-controllers-69487b4b9-mwnnc\" (UID: \"0a5993e3-01c7-40e2-a922-ad294021ce88\") " pod="calico-system/calico-kube-controllers-69487b4b9-mwnnc"
Sep 4 17:12:21.382334 containerd[2019]: time="2024-09-04T17:12:21.382272176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69487b4b9-mwnnc,Uid:0a5993e3-01c7-40e2-a922-ad294021ce88,Namespace:calico-system,Attempt:0,}"
Sep 4 17:12:21.405455 containerd[2019]: time="2024-09-04T17:12:21.405079088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5wbnj,Uid:863ff27d-a93d-4e40-81b6-a08a0a0a3b4d,Namespace:kube-system,Attempt:0,}"
Sep 4 17:12:21.665845 containerd[2019]: time="2024-09-04T17:12:21.665766285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lt6bq,Uid:ae4abada-c0d1-4d32-85d1-deba613db57a,Namespace:kube-system,Attempt:0,}"
Sep 4 17:12:22.068098 containerd[2019]: time="2024-09-04T17:12:22.067899931Z" level=error msg="Failed to destroy network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.073090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a-shm.mount: Deactivated successfully.
Sep 4 17:12:22.107501 containerd[2019]: time="2024-09-04T17:12:22.107414576Z" level=error msg="encountered an error cleaning up failed sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.107688 containerd[2019]: time="2024-09-04T17:12:22.107522552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69487b4b9-mwnnc,Uid:0a5993e3-01c7-40e2-a922-ad294021ce88,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.108089 kubelet[3211]: E0904 17:12:22.108035 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.108705 kubelet[3211]: E0904 17:12:22.108128 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69487b4b9-mwnnc"
Sep 4 17:12:22.108705 kubelet[3211]: E0904 17:12:22.108173 3211 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69487b4b9-mwnnc"
Sep 4 17:12:22.108705 kubelet[3211]: E0904 17:12:22.108277 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69487b4b9-mwnnc_calico-system(0a5993e3-01c7-40e2-a922-ad294021ce88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69487b4b9-mwnnc_calico-system(0a5993e3-01c7-40e2-a922-ad294021ce88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69487b4b9-mwnnc" podUID="0a5993e3-01c7-40e2-a922-ad294021ce88"
Sep 4 17:12:22.340249 systemd[1]: Created slice kubepods-besteffort-pod4c1c414c_401b_4001_b4a7_8eb90c8e06f7.slice - libcontainer container kubepods-besteffort-pod4c1c414c_401b_4001_b4a7_8eb90c8e06f7.slice.
Sep 4 17:12:22.347696 containerd[2019]: time="2024-09-04T17:12:22.347627937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdjd2,Uid:4c1c414c-401b-4001-b4a7-8eb90c8e06f7,Namespace:calico-system,Attempt:0,}"
Sep 4 17:12:22.515791 containerd[2019]: time="2024-09-04T17:12:22.515703442Z" level=error msg="Failed to destroy network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.516526 containerd[2019]: time="2024-09-04T17:12:22.516465514Z" level=error msg="encountered an error cleaning up failed sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.516667 containerd[2019]: time="2024-09-04T17:12:22.516562762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5wbnj,Uid:863ff27d-a93d-4e40-81b6-a08a0a0a3b4d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.516951 kubelet[3211]: E0904 17:12:22.516906 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.517042 kubelet[3211]: E0904 17:12:22.516985 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-5wbnj"
Sep 4 17:12:22.517042 kubelet[3211]: E0904 17:12:22.517026 3211 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-5wbnj"
Sep 4 17:12:22.517757 kubelet[3211]: E0904 17:12:22.517174 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-5wbnj_kube-system(863ff27d-a93d-4e40-81b6-a08a0a0a3b4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-5wbnj_kube-system(863ff27d-a93d-4e40-81b6-a08a0a0a3b4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-5wbnj" podUID="863ff27d-a93d-4e40-81b6-a08a0a0a3b4d"
Sep 4 17:12:22.590504 kubelet[3211]: I0904 17:12:22.590321 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf"
Sep 4 17:12:22.595248 containerd[2019]: time="2024-09-04T17:12:22.593261602Z" level=info msg="StopPodSandbox for \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\""
Sep 4 17:12:22.595248 containerd[2019]: time="2024-09-04T17:12:22.593674978Z" level=info msg="Ensure that sandbox 4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf in task-service has been cleanup successfully"
Sep 4 17:12:22.595464 kubelet[3211]: I0904 17:12:22.593863 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a"
Sep 4 17:12:22.595540 containerd[2019]: time="2024-09-04T17:12:22.595473202Z" level=info msg="StopPodSandbox for \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\""
Sep 4 17:12:22.595913 containerd[2019]: time="2024-09-04T17:12:22.595847662Z" level=info msg="Ensure that sandbox e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a in task-service has been cleanup successfully"
Sep 4 17:12:22.761247 containerd[2019]: time="2024-09-04T17:12:22.758462375Z" level=error msg="StopPodSandbox for \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\" failed" error="failed to destroy network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.761247 containerd[2019]: time="2024-09-04T17:12:22.759800003Z" level=error msg="StopPodSandbox for \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\" failed" error="failed to destroy network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.761464 kubelet[3211]: E0904 17:12:22.759063 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf"
Sep 4 17:12:22.761464 kubelet[3211]: E0904 17:12:22.759227 3211 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf"}
Sep 4 17:12:22.761464 kubelet[3211]: E0904 17:12:22.759340 3211 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:22.761464 kubelet[3211]: E0904 17:12:22.759442 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-5wbnj" podUID="863ff27d-a93d-4e40-81b6-a08a0a0a3b4d"
Sep 4 17:12:22.761829 kubelet[3211]: E0904 17:12:22.760062 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a"
Sep 4 17:12:22.761829 kubelet[3211]: E0904 17:12:22.760108 3211 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a"}
Sep 4 17:12:22.761829 kubelet[3211]: E0904 17:12:22.760165 3211 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a5993e3-01c7-40e2-a922-ad294021ce88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:22.761829 kubelet[3211]: E0904 17:12:22.760230 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a5993e3-01c7-40e2-a922-ad294021ce88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69487b4b9-mwnnc" podUID="0a5993e3-01c7-40e2-a922-ad294021ce88"
Sep 4 17:12:22.848869 containerd[2019]: time="2024-09-04T17:12:22.848614919Z" level=error msg="Failed to destroy network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.850043 containerd[2019]: time="2024-09-04T17:12:22.849972671Z" level=error msg="encountered an error cleaning up failed sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.850158 containerd[2019]: time="2024-09-04T17:12:22.850068743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lt6bq,Uid:ae4abada-c0d1-4d32-85d1-deba613db57a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.850627 kubelet[3211]: E0904 17:12:22.850514 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:22.850810 kubelet[3211]: E0904 17:12:22.850677 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lt6bq"
Sep 4 17:12:22.850810 kubelet[3211]: E0904 17:12:22.850723 3211 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lt6bq"
Sep 4 17:12:22.850810 kubelet[3211]: E0904 17:12:22.850818 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-lt6bq_kube-system(ae4abada-c0d1-4d32-85d1-deba613db57a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-lt6bq_kube-system(ae4abada-c0d1-4d32-85d1-deba613db57a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lt6bq" podUID="ae4abada-c0d1-4d32-85d1-deba613db57a"
Sep 4 17:12:23.014316 containerd[2019]: time="2024-09-04T17:12:23.014104508Z" level=info msg="shim disconnected" id=236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30 namespace=k8s.io
Sep 4 17:12:23.014316 containerd[2019]: time="2024-09-04T17:12:23.014289512Z" level=warning msg="cleaning up after shim disconnected" id=236f46eaffa4e8248cc429d3fd25ffaa1d8db7836098b868647427d4c151eb30 namespace=k8s.io
Sep 4 17:12:23.014316 containerd[2019]: time="2024-09-04T17:12:23.014314256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:12:23.075392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66-shm.mount: Deactivated successfully.
Sep 4 17:12:23.076064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf-shm.mount: Deactivated successfully.
Sep 4 17:12:23.102642 containerd[2019]: time="2024-09-04T17:12:23.102429669Z" level=error msg="Failed to destroy network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:23.104043 containerd[2019]: time="2024-09-04T17:12:23.103967457Z" level=error msg="encountered an error cleaning up failed sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:23.104185 containerd[2019]: time="2024-09-04T17:12:23.104065341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdjd2,Uid:4c1c414c-401b-4001-b4a7-8eb90c8e06f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:23.106976 kubelet[3211]: E0904 17:12:23.106894 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:23.108712 kubelet[3211]: E0904 17:12:23.107060 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdjd2"
Sep 4 17:12:23.108712 kubelet[3211]: E0904 17:12:23.107103 3211 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdjd2"
Sep 4 17:12:23.108712 kubelet[3211]: E0904 17:12:23.107244 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sdjd2_calico-system(4c1c414c-401b-4001-b4a7-8eb90c8e06f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sdjd2_calico-system(4c1c414c-401b-4001-b4a7-8eb90c8e06f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:23.109975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2-shm.mount: Deactivated successfully.
Sep 4 17:12:23.603081 kubelet[3211]: I0904 17:12:23.603018 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2"
Sep 4 17:12:23.606036 containerd[2019]: time="2024-09-04T17:12:23.605968631Z" level=info msg="StopPodSandbox for \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\""
Sep 4 17:12:23.606353 containerd[2019]: time="2024-09-04T17:12:23.606315155Z" level=info msg="Ensure that sandbox 7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2 in task-service has been cleanup successfully"
Sep 4 17:12:23.609094 kubelet[3211]: I0904 17:12:23.607090 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66"
Sep 4 17:12:23.610863 containerd[2019]: time="2024-09-04T17:12:23.610811015Z" level=info msg="StopPodSandbox for \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\""
Sep 4 17:12:23.613010 containerd[2019]: time="2024-09-04T17:12:23.612021179Z" level=info msg="Ensure that sandbox f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66 in task-service has been cleanup successfully"
Sep 4 17:12:23.621862 containerd[2019]: time="2024-09-04T17:12:23.620781155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:12:23.717362 containerd[2019]: time="2024-09-04T17:12:23.716943336Z" level=error msg="StopPodSandbox for \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\" failed" error="failed to destroy network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:23.718059 kubelet[3211]: E0904 17:12:23.718028 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2"
Sep 4 17:12:23.718500 kubelet[3211]: E0904 17:12:23.718293 3211 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2"}
Sep 4 17:12:23.718500 kubelet[3211]: E0904 17:12:23.718426 3211 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c1c414c-401b-4001-b4a7-8eb90c8e06f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:23.718931 kubelet[3211]: E0904 17:12:23.718829 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c1c414c-401b-4001-b4a7-8eb90c8e06f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdjd2" podUID="4c1c414c-401b-4001-b4a7-8eb90c8e06f7"
Sep 4 17:12:23.720699 containerd[2019]: time="2024-09-04T17:12:23.720623460Z" level=error msg="StopPodSandbox for \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\" failed" error="failed to destroy network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:23.721436 kubelet[3211]: E0904 17:12:23.721034 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66"
Sep 4 17:12:23.721436 kubelet[3211]: E0904 17:12:23.721132 3211 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66"}
Sep 4 17:12:23.721436 kubelet[3211]: E0904 17:12:23.721205 3211 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae4abada-c0d1-4d32-85d1-deba613db57a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:23.721436 kubelet[3211]: E0904 17:12:23.721259 3211 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae4abada-c0d1-4d32-85d1-deba613db57a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lt6bq" podUID="ae4abada-c0d1-4d32-85d1-deba613db57a"
Sep 4 17:12:26.115670 systemd[1]: Started sshd@7-172.31.30.239:22-139.178.89.65:43622.service - OpenSSH per-connection server daemon (139.178.89.65:43622).
Sep 4 17:12:26.344653 sshd[4468]: Accepted publickey for core from 139.178.89.65 port 43622 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:12:26.351454 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:12:26.370656 systemd-logind[1996]: New session 8 of user core.
Sep 4 17:12:26.379868 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:12:26.852489 sshd[4468]: pam_unix(sshd:session): session closed for user core
Sep 4 17:12:26.867902 systemd[1]: sshd@7-172.31.30.239:22-139.178.89.65:43622.service: Deactivated successfully.
Sep 4 17:12:26.877178 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:12:26.881321 systemd-logind[1996]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:12:26.886292 systemd-logind[1996]: Removed session 8.
Sep 4 17:12:29.915989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794608888.mount: Deactivated successfully.
Sep 4 17:12:30.381333 containerd[2019]: time="2024-09-04T17:12:30.381191729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:30.384238 containerd[2019]: time="2024-09-04T17:12:30.384160745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300"
Sep 4 17:12:30.390520 containerd[2019]: time="2024-09-04T17:12:30.390413501Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:30.400006 containerd[2019]: time="2024-09-04T17:12:30.399799685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:30.401260 containerd[2019]: time="2024-09-04T17:12:30.401039405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.780193306s"
Sep 4 17:12:30.401260 containerd[2019]: time="2024-09-04T17:12:30.401113085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\""
Sep 4 17:12:30.428265 containerd[2019]: time="2024-09-04T17:12:30.427987109Z" level=info msg="CreateContainer within sandbox \"34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 17:12:30.577077 containerd[2019]: time="2024-09-04T17:12:30.577008870Z" level=info msg="CreateContainer within sandbox \"34c0e187814ce98d7468bdfd13414c9b4c1646eec8a2677fedb60147833f6b70\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6de14fcf7e605153339e82b5077a53c6a7b47edb482ec39b2288a91b7417e85d\""
Sep 4 17:12:30.579263 containerd[2019]: time="2024-09-04T17:12:30.579192210Z" level=info msg="StartContainer for \"6de14fcf7e605153339e82b5077a53c6a7b47edb482ec39b2288a91b7417e85d\""
Sep 4 17:12:30.644937 systemd[1]: Started cri-containerd-6de14fcf7e605153339e82b5077a53c6a7b47edb482ec39b2288a91b7417e85d.scope - libcontainer container 6de14fcf7e605153339e82b5077a53c6a7b47edb482ec39b2288a91b7417e85d.
Sep 4 17:12:30.774830 containerd[2019]: time="2024-09-04T17:12:30.774721591Z" level=info msg="StartContainer for \"6de14fcf7e605153339e82b5077a53c6a7b47edb482ec39b2288a91b7417e85d\" returns successfully"
Sep 4 17:12:30.937657 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 17:12:30.937796 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 4 17:12:31.892085 systemd[1]: Started sshd@8-172.31.30.239:22-139.178.89.65:45772.service - OpenSSH per-connection server daemon (139.178.89.65:45772).
Sep 4 17:12:32.071201 sshd[4576]: Accepted publickey for core from 139.178.89.65 port 45772 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:12:32.074009 sshd[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:12:32.082865 systemd-logind[1996]: New session 9 of user core.
Sep 4 17:12:32.090859 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 17:12:32.327829 sshd[4576]: pam_unix(sshd:session): session closed for user core
Sep 4 17:12:32.333894 systemd[1]: sshd@8-172.31.30.239:22-139.178.89.65:45772.service: Deactivated successfully.
Sep 4 17:12:32.337931 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 17:12:32.341240 systemd-logind[1996]: Session 9 logged out. Waiting for processes to exit.
Sep 4 17:12:32.344284 systemd-logind[1996]: Removed session 9.
Sep 4 17:12:33.329115 containerd[2019]: time="2024-09-04T17:12:33.328966135Z" level=info msg="StopPodSandbox for \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\""
Sep 4 17:12:33.372669 kernel: bpftool[4745]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 4 17:12:33.482088 kubelet[3211]: I0904 17:12:33.479305 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-v9jnd" podStartSLOduration=4.351337987 podCreationTimestamp="2024-09-04 17:12:04 +0000 UTC" firstStartedPulling="2024-09-04 17:12:05.27387406 +0000 UTC m=+22.194302535" lastFinishedPulling="2024-09-04 17:12:30.401784941 +0000 UTC m=+47.322213416" observedRunningTime="2024-09-04 17:12:31.717739783 +0000 UTC m=+48.638168270" watchObservedRunningTime="2024-09-04 17:12:33.479248868 +0000 UTC m=+50.399677355"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.478 [INFO][4738] k8s.go 608: Cleaning up netns ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.481 [INFO][4738] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" iface="eth0" netns="/var/run/netns/cni-36d6dd78-e9dd-25ac-2834-fda7111a3ee2"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.481 [INFO][4738] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" iface="eth0" netns="/var/run/netns/cni-36d6dd78-e9dd-25ac-2834-fda7111a3ee2"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.483 [INFO][4738] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" iface="eth0" netns="/var/run/netns/cni-36d6dd78-e9dd-25ac-2834-fda7111a3ee2"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.484 [INFO][4738] k8s.go 615: Releasing IP address(es) ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.484 [INFO][4738] utils.go 188: Calico CNI releasing IP address ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.558 [INFO][4748] ipam_plugin.go 417: Releasing address using handleID ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.559 [INFO][4748] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.559 [INFO][4748] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.574 [WARNING][4748] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.575 [INFO][4748] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.577 [INFO][4748] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:12:33.591634 containerd[2019]: 2024-09-04 17:12:33.586 [INFO][4738] k8s.go 621: Teardown processing complete. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf"
Sep 4 17:12:33.592372 containerd[2019]: time="2024-09-04T17:12:33.591933249Z" level=info msg="TearDown network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\" successfully"
Sep 4 17:12:33.595857 containerd[2019]: time="2024-09-04T17:12:33.593637441Z" level=info msg="StopPodSandbox for \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\" returns successfully"
Sep 4 17:12:33.596722 containerd[2019]: time="2024-09-04T17:12:33.596605281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5wbnj,Uid:863ff27d-a93d-4e40-81b6-a08a0a0a3b4d,Namespace:kube-system,Attempt:1,}"
Sep 4 17:12:33.598525 systemd[1]: run-netns-cni\x2d36d6dd78\x2de9dd\x2d25ac\x2d2834\x2dfda7111a3ee2.mount: Deactivated successfully.
Sep 4 17:12:33.921178 systemd-networkd[1895]: vxlan.calico: Link UP
Sep 4 17:12:33.921201 systemd-networkd[1895]: vxlan.calico: Gained carrier
Sep 4 17:12:33.927300 (udev-worker)[4794]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:12:34.051490 (udev-worker)[4528]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:12:34.056750 systemd-networkd[1895]: calic45a7a9ad43: Link UP
Sep 4 17:12:34.059699 systemd-networkd[1895]: calic45a7a9ad43: Gained carrier
Sep 4 17:12:34.066180 (udev-worker)[4811]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.826 [INFO][4765] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0 coredns-5dd5756b68- kube-system 863ff27d-a93d-4e40-81b6-a08a0a0a3b4d 776 0 2024-09-04 17:11:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-239 coredns-5dd5756b68-5wbnj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic45a7a9ad43 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.827 [INFO][4765] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.914 [INFO][4785] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" HandleID="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.951 [INFO][4785] ipam_plugin.go 270: Auto assigning IP ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" HandleID="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317bb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-239", "pod":"coredns-5dd5756b68-5wbnj", "timestamp":"2024-09-04 17:12:33.914138338 +0000 UTC"}, Hostname:"ip-172-31-30-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.951 [INFO][4785] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.951 [INFO][4785] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.951 [INFO][4785] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-239'
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.960 [INFO][4785] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.974 [INFO][4785] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.985 [INFO][4785] ipam.go 489: Trying affinity for 192.168.8.0/26 host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.988 [INFO][4785] ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.995 [INFO][4785] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.995 [INFO][4785] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:33.999 [INFO][4785] ipam.go 1685: Creating new handle: k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:34.010 [INFO][4785] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:34.022 [INFO][4785] ipam.go 1216: Successfully claimed IPs: [192.168.8.1/26] block=192.168.8.0/26 handle="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:34.022 [INFO][4785] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.1/26] handle="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" host="ip-172-31-30-239"
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:34.022 [INFO][4785] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:12:34.097064 containerd[2019]: 2024-09-04 17:12:34.022 [INFO][4785] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.1/26] IPv6=[] ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" HandleID="k8s-pod-network.b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:34.100935 containerd[2019]: 2024-09-04 17:12:34.029 [INFO][4765] k8s.go 386: Populated endpoint ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"", Pod:"coredns-5dd5756b68-5wbnj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic45a7a9ad43", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:12:34.100935 containerd[2019]: 2024-09-04 17:12:34.029 [INFO][4765] k8s.go 387: Calico CNI using IPs: [192.168.8.1/32] ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:34.100935 containerd[2019]: 2024-09-04 17:12:34.030 [INFO][4765] dataplane_linux.go 68: Setting the host side veth name to calic45a7a9ad43 ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:34.100935 containerd[2019]: 2024-09-04 17:12:34.061 [INFO][4765] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:34.100935 containerd[2019]: 2024-09-04 17:12:34.062 [INFO][4765] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf", Pod:"coredns-5dd5756b68-5wbnj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic45a7a9ad43", MAC:"8a:76:17:24:e1:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:12:34.100935 containerd[2019]: 2024-09-04 17:12:34.085 [INFO][4765] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf" Namespace="kube-system" Pod="coredns-5dd5756b68-5wbnj" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0"
Sep 4 17:12:34.198465 containerd[2019]: time="2024-09-04T17:12:34.197952608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:12:34.198465 containerd[2019]: time="2024-09-04T17:12:34.198093248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:34.198465 containerd[2019]: time="2024-09-04T17:12:34.198279416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:12:34.199427 containerd[2019]: time="2024-09-04T17:12:34.199139588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:34.259900 systemd[1]: Started cri-containerd-b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf.scope - libcontainer container b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf.
Sep 4 17:12:34.336302 containerd[2019]: time="2024-09-04T17:12:34.336246848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5wbnj,Uid:863ff27d-a93d-4e40-81b6-a08a0a0a3b4d,Namespace:kube-system,Attempt:1,} returns sandbox id \"b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf\""
Sep 4 17:12:34.344402 containerd[2019]: time="2024-09-04T17:12:34.344341976Z" level=info msg="CreateContainer within sandbox \"b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 17:12:34.385005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473923774.mount: Deactivated successfully.
Sep 4 17:12:34.385751 containerd[2019]: time="2024-09-04T17:12:34.385353933Z" level=info msg="CreateContainer within sandbox \"b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45519576e48cd7bf7146538787947cbc5e9a38927a506d11e5f5f843e2e55f7e\""
Sep 4 17:12:34.388714 containerd[2019]: time="2024-09-04T17:12:34.387067953Z" level=info msg="StartContainer for \"45519576e48cd7bf7146538787947cbc5e9a38927a506d11e5f5f843e2e55f7e\""
Sep 4 17:12:34.456897 systemd[1]: Started cri-containerd-45519576e48cd7bf7146538787947cbc5e9a38927a506d11e5f5f843e2e55f7e.scope - libcontainer container 45519576e48cd7bf7146538787947cbc5e9a38927a506d11e5f5f843e2e55f7e.
Sep 4 17:12:34.565692 containerd[2019]: time="2024-09-04T17:12:34.565095849Z" level=info msg="StartContainer for \"45519576e48cd7bf7146538787947cbc5e9a38927a506d11e5f5f843e2e55f7e\" returns successfully"
Sep 4 17:12:35.328411 containerd[2019]: time="2024-09-04T17:12:35.328356681Z" level=info msg="StopPodSandbox for \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\""
Sep 4 17:12:35.331476 containerd[2019]: time="2024-09-04T17:12:35.330257937Z" level=info msg="StopPodSandbox for \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\""
Sep 4 17:12:35.333197 containerd[2019]: time="2024-09-04T17:12:35.332767161Z" level=info msg="StopPodSandbox for \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\""
Sep 4 17:12:35.585131 kubelet[3211]: I0904 17:12:35.584964 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5wbnj" podStartSLOduration=39.584909159 podCreationTimestamp="2024-09-04 17:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:34.704791714 +0000 UTC m=+51.625220213" watchObservedRunningTime="2024-09-04 17:12:35.584909159 +0000 UTC m=+52.505337634"
Sep 4 17:12:35.670928 systemd-networkd[1895]: calic45a7a9ad43: Gained IPv6LL
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.587 [INFO][4999] k8s.go 608: Cleaning up netns ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.588 [INFO][4999] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" iface="eth0" netns="/var/run/netns/cni-b8418164-b12b-06e8-1533-c0b0d0342c47"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.589 [INFO][4999] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" iface="eth0" netns="/var/run/netns/cni-b8418164-b12b-06e8-1533-c0b0d0342c47"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.590 [INFO][4999] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" iface="eth0" netns="/var/run/netns/cni-b8418164-b12b-06e8-1533-c0b0d0342c47"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.590 [INFO][4999] k8s.go 615: Releasing IP address(es) ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.590 [INFO][4999] utils.go 188: Calico CNI releasing IP address ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.715 [INFO][5009] ipam_plugin.go 417: Releasing address using handleID ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.715 [INFO][5009] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.715 [INFO][5009] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.743 [WARNING][5009] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.743 [INFO][5009] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0"
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.749 [INFO][5009] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:12:35.763032 containerd[2019]: 2024-09-04 17:12:35.760 [INFO][4999] k8s.go 621: Teardown processing complete. ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a"
Sep 4 17:12:35.769350 containerd[2019]: time="2024-09-04T17:12:35.766648463Z" level=info msg="TearDown network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\" successfully"
Sep 4 17:12:35.769350 containerd[2019]: time="2024-09-04T17:12:35.766697183Z" level=info msg="StopPodSandbox for \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\" returns successfully"
Sep 4 17:12:35.770089 systemd[1]: run-netns-cni\x2db8418164\x2db12b\x2d06e8\x2d1533\x2dc0b0d0342c47.mount: Deactivated successfully.
Sep 4 17:12:35.772349 containerd[2019]: time="2024-09-04T17:12:35.770385455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69487b4b9-mwnnc,Uid:0a5993e3-01c7-40e2-a922-ad294021ce88,Namespace:calico-system,Attempt:1,}"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.591 [INFO][4988] k8s.go 608: Cleaning up netns ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.592 [INFO][4988] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" iface="eth0" netns="/var/run/netns/cni-089003cd-000d-91f4-2a46-c50859e8acad"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.597 [INFO][4988] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" iface="eth0" netns="/var/run/netns/cni-089003cd-000d-91f4-2a46-c50859e8acad"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.600 [INFO][4988] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" iface="eth0" netns="/var/run/netns/cni-089003cd-000d-91f4-2a46-c50859e8acad"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.600 [INFO][4988] k8s.go 615: Releasing IP address(es) ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.601 [INFO][4988] utils.go 188: Calico CNI releasing IP address ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.726 [INFO][5014] ipam_plugin.go 417: Releasing address using handleID ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.726 [INFO][5014] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.749 [INFO][5014] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.783 [WARNING][5014] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.783 [INFO][5014] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0"
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.790 [INFO][5014] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:12:35.802900 containerd[2019]: 2024-09-04 17:12:35.797 [INFO][4988] k8s.go 621: Teardown processing complete. ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2"
Sep 4 17:12:35.804993 containerd[2019]: time="2024-09-04T17:12:35.804709524Z" level=info msg="TearDown network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\" successfully"
Sep 4 17:12:35.804993 containerd[2019]: time="2024-09-04T17:12:35.804756924Z" level=info msg="StopPodSandbox for \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\" returns successfully"
Sep 4 17:12:35.809665 containerd[2019]: time="2024-09-04T17:12:35.807785304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdjd2,Uid:4c1c414c-401b-4001-b4a7-8eb90c8e06f7,Namespace:calico-system,Attempt:1,}"
Sep 4 17:12:35.831148 systemd[1]: run-netns-cni\x2d089003cd\x2d000d\x2d91f4\x2d2a46\x2dc50859e8acad.mount: Deactivated successfully.
Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.613 [INFO][4992] k8s.go 608: Cleaning up netns ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.616 [INFO][4992] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" iface="eth0" netns="/var/run/netns/cni-80f88065-2c94-205e-6d72-9fa1360e3133" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.617 [INFO][4992] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" iface="eth0" netns="/var/run/netns/cni-80f88065-2c94-205e-6d72-9fa1360e3133" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.618 [INFO][4992] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" iface="eth0" netns="/var/run/netns/cni-80f88065-2c94-205e-6d72-9fa1360e3133" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.618 [INFO][4992] k8s.go 615: Releasing IP address(es) ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.618 [INFO][4992] utils.go 188: Calico CNI releasing IP address ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.740 [INFO][5018] ipam_plugin.go 417: Releasing address using handleID ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.743 [INFO][5018] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.790 [INFO][5018] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.838 [WARNING][5018] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.842 [INFO][5018] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.846 [INFO][5018] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:35.863848 containerd[2019]: 2024-09-04 17:12:35.852 [INFO][4992] k8s.go 621: Teardown processing complete. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:35.863848 containerd[2019]: time="2024-09-04T17:12:35.863366832Z" level=info msg="TearDown network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\" successfully" Sep 4 17:12:35.863848 containerd[2019]: time="2024-09-04T17:12:35.863407212Z" level=info msg="StopPodSandbox for \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\" returns successfully" Sep 4 17:12:35.876187 containerd[2019]: time="2024-09-04T17:12:35.874549008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lt6bq,Uid:ae4abada-c0d1-4d32-85d1-deba613db57a,Namespace:kube-system,Attempt:1,}" Sep 4 17:12:35.875820 systemd[1]: run-netns-cni\x2d80f88065\x2d2c94\x2d205e\x2d6d72\x2d9fa1360e3133.mount: Deactivated successfully. 
Sep 4 17:12:35.927836 systemd-networkd[1895]: vxlan.calico: Gained IPv6LL Sep 4 17:12:36.351345 systemd-networkd[1895]: calie666717e2ea: Link UP Sep 4 17:12:36.360267 systemd-networkd[1895]: calie666717e2ea: Gained carrier Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.029 [INFO][5029] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0 calico-kube-controllers-69487b4b9- calico-system 0a5993e3-01c7-40e2-a922-ad294021ce88 798 0 2024-09-04 17:12:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69487b4b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-239 calico-kube-controllers-69487b4b9-mwnnc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie666717e2ea [] []}} ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.030 [INFO][5029] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.181 [INFO][5065] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" HandleID="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" 
Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.232 [INFO][5065] ipam_plugin.go 270: Auto assigning IP ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" HandleID="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003148c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-239", "pod":"calico-kube-controllers-69487b4b9-mwnnc", "timestamp":"2024-09-04 17:12:36.181279893 +0000 UTC"}, Hostname:"ip-172-31-30-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.233 [INFO][5065] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.233 [INFO][5065] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.234 [INFO][5065] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-239' Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.243 [INFO][5065] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.262 [INFO][5065] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.301 [INFO][5065] ipam.go 489: Trying affinity for 192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.307 [INFO][5065] ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.313 [INFO][5065] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.314 [INFO][5065] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.316 [INFO][5065] ipam.go 1685: Creating new handle: k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.325 [INFO][5065] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.339 [INFO][5065] ipam.go 1216: Successfully claimed IPs: [192.168.8.2/26] block=192.168.8.0/26 
handle="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.339 [INFO][5065] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.2/26] handle="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" host="ip-172-31-30-239" Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.340 [INFO][5065] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:36.391808 containerd[2019]: 2024-09-04 17:12:36.340 [INFO][5065] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.2/26] IPv6=[] ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" HandleID="k8s-pod-network.30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:36.394733 containerd[2019]: 2024-09-04 17:12:36.344 [INFO][5029] k8s.go 386: Populated endpoint ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0", GenerateName:"calico-kube-controllers-69487b4b9-", Namespace:"calico-system", SelfLink:"", UID:"0a5993e3-01c7-40e2-a922-ad294021ce88", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69487b4b9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"", Pod:"calico-kube-controllers-69487b4b9-mwnnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie666717e2ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:36.394733 containerd[2019]: 2024-09-04 17:12:36.344 [INFO][5029] k8s.go 387: Calico CNI using IPs: [192.168.8.2/32] ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:36.394733 containerd[2019]: 2024-09-04 17:12:36.344 [INFO][5029] dataplane_linux.go 68: Setting the host side veth name to calie666717e2ea ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:36.394733 containerd[2019]: 2024-09-04 17:12:36.359 [INFO][5029] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:36.394733 containerd[2019]: 2024-09-04 17:12:36.362 [INFO][5029] k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0", GenerateName:"calico-kube-controllers-69487b4b9-", Namespace:"calico-system", SelfLink:"", UID:"0a5993e3-01c7-40e2-a922-ad294021ce88", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69487b4b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df", Pod:"calico-kube-controllers-69487b4b9-mwnnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie666717e2ea", MAC:"1e:a5:50:ed:37:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:36.394733 containerd[2019]: 2024-09-04 17:12:36.386 [INFO][5029] k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df" Namespace="calico-system" Pod="calico-kube-controllers-69487b4b9-mwnnc" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:36.465177 systemd-networkd[1895]: caliac9f123ad76: Link UP Sep 4 17:12:36.467297 systemd-networkd[1895]: caliac9f123ad76: Gained carrier Sep 4 17:12:36.521891 containerd[2019]: time="2024-09-04T17:12:36.519218723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:36.521891 containerd[2019]: time="2024-09-04T17:12:36.519401699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.521891 containerd[2019]: time="2024-09-04T17:12:36.519516431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:36.529044 containerd[2019]: time="2024-09-04T17:12:36.519562823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.121 [INFO][5051] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0 coredns-5dd5756b68- kube-system ae4abada-c0d1-4d32-85d1-deba613db57a 800 0 2024-09-04 17:11:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-239 coredns-5dd5756b68-lt6bq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac9f123ad76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.122 [INFO][5051] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.260 [INFO][5075] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" HandleID="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.292 [INFO][5075] ipam_plugin.go 270: Auto assigning IP ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" HandleID="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" 
Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031cbc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-239", "pod":"coredns-5dd5756b68-lt6bq", "timestamp":"2024-09-04 17:12:36.260119666 +0000 UTC"}, Hostname:"ip-172-31-30-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.293 [INFO][5075] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.340 [INFO][5075] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.340 [INFO][5075] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-239' Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.346 [INFO][5075] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.360 [INFO][5075] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.377 [INFO][5075] ipam.go 489: Trying affinity for 192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.389 [INFO][5075] ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.397 [INFO][5075] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.397 [INFO][5075] ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.8.0/26 handle="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.402 [INFO][5075] ipam.go 1685: Creating new handle: k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.411 [INFO][5075] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.427 [INFO][5075] ipam.go 1216: Successfully claimed IPs: [192.168.8.3/26] block=192.168.8.0/26 handle="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.428 [INFO][5075] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.3/26] handle="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" host="ip-172-31-30-239" Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.428 [INFO][5075] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:12:36.539902 containerd[2019]: 2024-09-04 17:12:36.428 [INFO][5075] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.3/26] IPv6=[] ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" HandleID="k8s-pod-network.49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:36.543234 containerd[2019]: 2024-09-04 17:12:36.442 [INFO][5051] k8s.go 386: Populated endpoint ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ae4abada-c0d1-4d32-85d1-deba613db57a", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"", Pod:"coredns-5dd5756b68-lt6bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac9f123ad76", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:36.543234 containerd[2019]: 2024-09-04 17:12:36.443 [INFO][5051] k8s.go 387: Calico CNI using IPs: [192.168.8.3/32] ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:36.543234 containerd[2019]: 2024-09-04 17:12:36.443 [INFO][5051] dataplane_linux.go 68: Setting the host side veth name to caliac9f123ad76 ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:36.543234 containerd[2019]: 2024-09-04 17:12:36.473 [INFO][5051] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:36.543234 containerd[2019]: 2024-09-04 17:12:36.480 [INFO][5051] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ae4abada-c0d1-4d32-85d1-deba613db57a", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e", Pod:"coredns-5dd5756b68-lt6bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac9f123ad76", MAC:"ae:cd:84:77:a5:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:36.543234 containerd[2019]: 2024-09-04 17:12:36.528 [INFO][5051] k8s.go 500: Wrote updated endpoint to datastore ContainerID="49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e" Namespace="kube-system" 
Pod="coredns-5dd5756b68-lt6bq" WorkloadEndpoint="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:36.617912 systemd-networkd[1895]: cali3182303cedb: Link UP Sep 4 17:12:36.626894 systemd-networkd[1895]: cali3182303cedb: Gained carrier Sep 4 17:12:36.694430 containerd[2019]: time="2024-09-04T17:12:36.692455188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:36.694430 containerd[2019]: time="2024-09-04T17:12:36.692595084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.694430 containerd[2019]: time="2024-09-04T17:12:36.692648700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:36.694430 containerd[2019]: time="2024-09-04T17:12:36.692684652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.099 [INFO][5039] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0 csi-node-driver- calico-system 4c1c414c-401b-4001-b4a7-8eb90c8e06f7 799 0 2024-09-04 17:12:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-30-239 csi-node-driver-sdjd2 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3182303cedb [] []}} ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.101 [INFO][5039] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.262 [INFO][5071] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" HandleID="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.298 [INFO][5071] ipam_plugin.go 270: Auto assigning IP ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" 
HandleID="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004fe590), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-239", "pod":"csi-node-driver-sdjd2", "timestamp":"2024-09-04 17:12:36.261185218 +0000 UTC"}, Hostname:"ip-172-31-30-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.298 [INFO][5071] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.428 [INFO][5071] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.429 [INFO][5071] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-239' Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.438 [INFO][5071] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.456 [INFO][5071] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.500 [INFO][5071] ipam.go 489: Trying affinity for 192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.509 [INFO][5071] ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.516 [INFO][5071] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.517 
[INFO][5071] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.529 [INFO][5071] ipam.go 1685: Creating new handle: k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3 Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.549 [INFO][5071] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.570 [INFO][5071] ipam.go 1216: Successfully claimed IPs: [192.168.8.4/26] block=192.168.8.0/26 handle="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.570 [INFO][5071] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.4/26] handle="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" host="ip-172-31-30-239" Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.570 [INFO][5071] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:12:36.719426 containerd[2019]: 2024-09-04 17:12:36.570 [INFO][5071] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.4/26] IPv6=[] ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" HandleID="k8s-pod-network.2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:36.722126 containerd[2019]: 2024-09-04 17:12:36.588 [INFO][5039] k8s.go 386: Populated endpoint ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c1c414c-401b-4001-b4a7-8eb90c8e06f7", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"", Pod:"csi-node-driver-sdjd2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3182303cedb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:36.722126 containerd[2019]: 2024-09-04 17:12:36.588 [INFO][5039] k8s.go 387: Calico CNI using IPs: [192.168.8.4/32] ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:36.722126 containerd[2019]: 2024-09-04 17:12:36.588 [INFO][5039] dataplane_linux.go 68: Setting the host side veth name to cali3182303cedb ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:36.722126 containerd[2019]: 2024-09-04 17:12:36.632 [INFO][5039] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:36.722126 containerd[2019]: 2024-09-04 17:12:36.639 [INFO][5039] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c1c414c-401b-4001-b4a7-8eb90c8e06f7", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3", Pod:"csi-node-driver-sdjd2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3182303cedb", MAC:"be:83:28:3e:a0:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:36.722126 containerd[2019]: 2024-09-04 17:12:36.669 [INFO][5039] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3" Namespace="calico-system" Pod="csi-node-driver-sdjd2" WorkloadEndpoint="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:36.756401 systemd[1]: Started cri-containerd-30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df.scope - libcontainer container 30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df. Sep 4 17:12:36.848768 systemd[1]: Started cri-containerd-49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e.scope - libcontainer container 49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e. 
Sep 4 17:12:36.856504 containerd[2019]: time="2024-09-04T17:12:36.856257289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:36.856504 containerd[2019]: time="2024-09-04T17:12:36.856441021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.860874 containerd[2019]: time="2024-09-04T17:12:36.856479337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:36.860874 containerd[2019]: time="2024-09-04T17:12:36.857203225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.920703 systemd[1]: Started cri-containerd-2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3.scope - libcontainer container 2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3. 
Sep 4 17:12:36.993310 containerd[2019]: time="2024-09-04T17:12:36.993234746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lt6bq,Uid:ae4abada-c0d1-4d32-85d1-deba613db57a,Namespace:kube-system,Attempt:1,} returns sandbox id \"49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e\"" Sep 4 17:12:37.008895 containerd[2019]: time="2024-09-04T17:12:37.008811538Z" level=info msg="CreateContainer within sandbox \"49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:12:37.044366 containerd[2019]: time="2024-09-04T17:12:37.044281534Z" level=info msg="CreateContainer within sandbox \"49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6dc295c065e62d594c9023a2e0fccd1636f4d1799a129c54edb645703a1a04b\"" Sep 4 17:12:37.045237 containerd[2019]: time="2024-09-04T17:12:37.045027814Z" level=info msg="StartContainer for \"f6dc295c065e62d594c9023a2e0fccd1636f4d1799a129c54edb645703a1a04b\"" Sep 4 17:12:37.168926 systemd[1]: Started cri-containerd-f6dc295c065e62d594c9023a2e0fccd1636f4d1799a129c54edb645703a1a04b.scope - libcontainer container f6dc295c065e62d594c9023a2e0fccd1636f4d1799a129c54edb645703a1a04b. 
Sep 4 17:12:37.172209 containerd[2019]: time="2024-09-04T17:12:37.172107082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdjd2,Uid:4c1c414c-401b-4001-b4a7-8eb90c8e06f7,Namespace:calico-system,Attempt:1,} returns sandbox id \"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3\"" Sep 4 17:12:37.203549 containerd[2019]: time="2024-09-04T17:12:37.202771091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:12:37.232961 containerd[2019]: time="2024-09-04T17:12:37.232885391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69487b4b9-mwnnc,Uid:0a5993e3-01c7-40e2-a922-ad294021ce88,Namespace:calico-system,Attempt:1,} returns sandbox id \"30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df\"" Sep 4 17:12:37.305464 containerd[2019]: time="2024-09-04T17:12:37.305400647Z" level=info msg="StartContainer for \"f6dc295c065e62d594c9023a2e0fccd1636f4d1799a129c54edb645703a1a04b\" returns successfully" Sep 4 17:12:37.375258 systemd[1]: Started sshd@9-172.31.30.239:22-139.178.89.65:45778.service - OpenSSH per-connection server daemon (139.178.89.65:45778). Sep 4 17:12:37.581782 sshd[5287]: Accepted publickey for core from 139.178.89.65 port 45778 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:37.585876 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:37.599351 systemd-logind[1996]: New session 10 of user core. Sep 4 17:12:37.608784 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 17:12:37.835049 kubelet[3211]: I0904 17:12:37.834902 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lt6bq" podStartSLOduration=41.834817382 podCreationTimestamp="2024-09-04 17:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:37.781909825 +0000 UTC m=+54.702338324" watchObservedRunningTime="2024-09-04 17:12:37.834817382 +0000 UTC m=+54.755245881" Sep 4 17:12:38.000799 sshd[5287]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:38.008466 systemd[1]: sshd@9-172.31.30.239:22-139.178.89.65:45778.service: Deactivated successfully. Sep 4 17:12:38.017215 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:12:38.022831 systemd-logind[1996]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:12:38.026497 systemd-logind[1996]: Removed session 10. Sep 4 17:12:38.230780 systemd-networkd[1895]: caliac9f123ad76: Gained IPv6LL Sep 4 17:12:38.295202 systemd-networkd[1895]: calie666717e2ea: Gained IPv6LL Sep 4 17:12:38.423379 systemd-networkd[1895]: cali3182303cedb: Gained IPv6LL Sep 4 17:12:39.326710 containerd[2019]: time="2024-09-04T17:12:39.326567137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:39.329918 containerd[2019]: time="2024-09-04T17:12:39.329847457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:12:39.331945 containerd[2019]: time="2024-09-04T17:12:39.331489165Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:39.341429 containerd[2019]: time="2024-09-04T17:12:39.341259517Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:39.344168 containerd[2019]: time="2024-09-04T17:12:39.344084701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 2.140888054s" Sep 4 17:12:39.344376 containerd[2019]: time="2024-09-04T17:12:39.344186701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:12:39.347030 containerd[2019]: time="2024-09-04T17:12:39.346640113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:12:39.354614 containerd[2019]: time="2024-09-04T17:12:39.353628637Z" level=info msg="CreateContainer within sandbox \"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:12:39.398330 containerd[2019]: time="2024-09-04T17:12:39.398250121Z" level=info msg="CreateContainer within sandbox \"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2f061888b50adbe4ade02ad3c62a3dd72704e8fdb18bfa14da208f7d24837f11\"" Sep 4 17:12:39.404621 containerd[2019]: time="2024-09-04T17:12:39.400557013Z" level=info msg="StartContainer for \"2f061888b50adbe4ade02ad3c62a3dd72704e8fdb18bfa14da208f7d24837f11\"" Sep 4 17:12:39.406815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45997435.mount: Deactivated successfully. 
Sep 4 17:12:39.494949 systemd[1]: Started cri-containerd-2f061888b50adbe4ade02ad3c62a3dd72704e8fdb18bfa14da208f7d24837f11.scope - libcontainer container 2f061888b50adbe4ade02ad3c62a3dd72704e8fdb18bfa14da208f7d24837f11. Sep 4 17:12:39.602256 containerd[2019]: time="2024-09-04T17:12:39.602089514Z" level=info msg="StartContainer for \"2f061888b50adbe4ade02ad3c62a3dd72704e8fdb18bfa14da208f7d24837f11\" returns successfully" Sep 4 17:12:41.004765 ntpd[1988]: Listen normally on 8 vxlan.calico 192.168.8.0:123 Sep 4 17:12:41.005941 ntpd[1988]: 4 Sep 17:12:41 ntpd[1988]: Listen normally on 8 vxlan.calico 192.168.8.0:123 Sep 4 17:12:41.005941 ntpd[1988]: 4 Sep 17:12:41 ntpd[1988]: Listen normally on 9 vxlan.calico [fe80::64d8:f0ff:fe07:bb0d%4]:123 Sep 4 17:12:41.005941 ntpd[1988]: 4 Sep 17:12:41 ntpd[1988]: Listen normally on 10 calic45a7a9ad43 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:12:41.005941 ntpd[1988]: 4 Sep 17:12:41 ntpd[1988]: Listen normally on 11 calie666717e2ea [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:12:41.005941 ntpd[1988]: 4 Sep 17:12:41 ntpd[1988]: Listen normally on 12 caliac9f123ad76 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:12:41.005941 ntpd[1988]: 4 Sep 17:12:41 ntpd[1988]: Listen normally on 13 cali3182303cedb [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:12:41.004908 ntpd[1988]: Listen normally on 9 vxlan.calico [fe80::64d8:f0ff:fe07:bb0d%4]:123 Sep 4 17:12:41.004992 ntpd[1988]: Listen normally on 10 calic45a7a9ad43 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:12:41.005062 ntpd[1988]: Listen normally on 11 calie666717e2ea [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:12:41.005133 ntpd[1988]: Listen normally on 12 caliac9f123ad76 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:12:41.005229 ntpd[1988]: Listen normally on 13 cali3182303cedb [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:12:43.041173 systemd[1]: Started sshd@10-172.31.30.239:22-139.178.89.65:53094.service - OpenSSH per-connection server daemon (139.178.89.65:53094). 
Sep 4 17:12:43.252970 sshd[5360]: Accepted publickey for core from 139.178.89.65 port 53094 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:43.258257 sshd[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:43.274656 systemd-logind[1996]: New session 11 of user core. Sep 4 17:12:43.279245 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:12:43.375879 containerd[2019]: time="2024-09-04T17:12:43.375715313Z" level=info msg="StopPodSandbox for \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\"" Sep 4 17:12:43.656497 sshd[5360]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:43.668098 systemd[1]: sshd@10-172.31.30.239:22-139.178.89.65:53094.service: Deactivated successfully. Sep 4 17:12:43.674917 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:12:43.681373 systemd-logind[1996]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:12:43.701151 systemd[1]: Started sshd@11-172.31.30.239:22-139.178.89.65:53100.service - OpenSSH per-connection server daemon (139.178.89.65:53100). Sep 4 17:12:43.703301 systemd-logind[1996]: Removed session 11. 
Sep 4 17:12:43.852074 containerd[2019]: time="2024-09-04T17:12:43.850737872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:43.855346 containerd[2019]: time="2024-09-04T17:12:43.855299396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:12:43.855888 containerd[2019]: time="2024-09-04T17:12:43.855733400Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.556 [WARNING][5384] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c1c414c-401b-4001-b4a7-8eb90c8e06f7", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ip-172-31-30-239", ContainerID:"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3", Pod:"csi-node-driver-sdjd2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3182303cedb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.557 [INFO][5384] k8s.go 608: Cleaning up netns ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.557 [INFO][5384] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" iface="eth0" netns="" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.557 [INFO][5384] k8s.go 615: Releasing IP address(es) ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.558 [INFO][5384] utils.go 188: Calico CNI releasing IP address ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.689 [INFO][5392] ipam_plugin.go 417: Releasing address using handleID ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.826 [INFO][5392] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.827 [INFO][5392] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.847 [WARNING][5392] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.847 [INFO][5392] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.851 [INFO][5392] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:43.864407 containerd[2019]: 2024-09-04 17:12:43.856 [INFO][5384] k8s.go 621: Teardown processing complete. 
ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:43.864407 containerd[2019]: time="2024-09-04T17:12:43.864180308Z" level=info msg="TearDown network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\" successfully" Sep 4 17:12:43.864407 containerd[2019]: time="2024-09-04T17:12:43.864217640Z" level=info msg="StopPodSandbox for \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\" returns successfully" Sep 4 17:12:43.865447 containerd[2019]: time="2024-09-04T17:12:43.865216580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:43.866818 containerd[2019]: time="2024-09-04T17:12:43.866550296Z" level=info msg="RemovePodSandbox for \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\"" Sep 4 17:12:43.866818 containerd[2019]: time="2024-09-04T17:12:43.866673236Z" level=info msg="Forcibly stopping sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\"" Sep 4 17:12:43.869900 containerd[2019]: time="2024-09-04T17:12:43.869829956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 4.522790327s" Sep 4 17:12:43.870568 containerd[2019]: time="2024-09-04T17:12:43.869899004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:12:43.871641 containerd[2019]: time="2024-09-04T17:12:43.871018544Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:12:43.895547 sshd[5401]: Accepted publickey for core from 139.178.89.65 port 53100 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:43.904375 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:43.926665 systemd-logind[1996]: New session 12 of user core. Sep 4 17:12:43.930329 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:12:43.967967 containerd[2019]: time="2024-09-04T17:12:43.966990764Z" level=info msg="CreateContainer within sandbox \"30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:12:44.049767 containerd[2019]: time="2024-09-04T17:12:44.048714809Z" level=info msg="CreateContainer within sandbox \"30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"71210e117acc042d3fa02eb81e0666e4abe74a7e92432abac27da900d5d0d90e\"" Sep 4 17:12:44.059344 containerd[2019]: time="2024-09-04T17:12:44.054999557Z" level=info msg="StartContainer for \"71210e117acc042d3fa02eb81e0666e4abe74a7e92432abac27da900d5d0d90e\"" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.021 [WARNING][5420] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c1c414c-401b-4001-b4a7-8eb90c8e06f7", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3", Pod:"csi-node-driver-sdjd2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3182303cedb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.022 [INFO][5420] k8s.go 608: Cleaning up netns ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.022 [INFO][5420] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" iface="eth0" netns="" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.022 [INFO][5420] k8s.go 615: Releasing IP address(es) ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.022 [INFO][5420] utils.go 188: Calico CNI releasing IP address ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.102 [INFO][5427] ipam_plugin.go 417: Releasing address using handleID ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.102 [INFO][5427] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.103 [INFO][5427] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.125 [WARNING][5427] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.125 [INFO][5427] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" HandleID="k8s-pod-network.7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Workload="ip--172--31--30--239-k8s-csi--node--driver--sdjd2-eth0" Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.135 [INFO][5427] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:44.147497 containerd[2019]: 2024-09-04 17:12:44.141 [INFO][5420] k8s.go 621: Teardown processing complete. ContainerID="7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2" Sep 4 17:12:44.149025 containerd[2019]: time="2024-09-04T17:12:44.148977833Z" level=info msg="TearDown network for sandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\" successfully" Sep 4 17:12:44.183913 containerd[2019]: time="2024-09-04T17:12:44.183149969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:12:44.183913 containerd[2019]: time="2024-09-04T17:12:44.183324173Z" level=info msg="RemovePodSandbox \"7d040bbc46f59f067c6a61f11bf7cc3255a42e5c1459723f20617c969ed60fe2\" returns successfully" Sep 4 17:12:44.188987 containerd[2019]: time="2024-09-04T17:12:44.188218925Z" level=info msg="StopPodSandbox for \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\"" Sep 4 17:12:44.207777 systemd[1]: Started cri-containerd-71210e117acc042d3fa02eb81e0666e4abe74a7e92432abac27da900d5d0d90e.scope - libcontainer container 71210e117acc042d3fa02eb81e0666e4abe74a7e92432abac27da900d5d0d90e. Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.334 [WARNING][5477] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0", GenerateName:"calico-kube-controllers-69487b4b9-", Namespace:"calico-system", SelfLink:"", UID:"0a5993e3-01c7-40e2-a922-ad294021ce88", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69487b4b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", 
ContainerID:"30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df", Pod:"calico-kube-controllers-69487b4b9-mwnnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie666717e2ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.334 [INFO][5477] k8s.go 608: Cleaning up netns ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.334 [INFO][5477] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" iface="eth0" netns="" Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.336 [INFO][5477] k8s.go 615: Releasing IP address(es) ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.336 [INFO][5477] utils.go 188: Calico CNI releasing IP address ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.414 [INFO][5484] ipam_plugin.go 417: Releasing address using handleID ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.414 [INFO][5484] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.415 [INFO][5484] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.436 [WARNING][5484] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.436 [INFO][5484] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.445 [INFO][5484] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:44.455156 containerd[2019]: 2024-09-04 17:12:44.450 [INFO][5477] k8s.go 621: Teardown processing complete. 
ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.455156 containerd[2019]: time="2024-09-04T17:12:44.454571839Z" level=info msg="TearDown network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\" successfully" Sep 4 17:12:44.455156 containerd[2019]: time="2024-09-04T17:12:44.454637911Z" level=info msg="StopPodSandbox for \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\" returns successfully" Sep 4 17:12:44.459640 containerd[2019]: time="2024-09-04T17:12:44.456492931Z" level=info msg="RemovePodSandbox for \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\"" Sep 4 17:12:44.459640 containerd[2019]: time="2024-09-04T17:12:44.456545827Z" level=info msg="Forcibly stopping sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\"" Sep 4 17:12:44.588473 containerd[2019]: time="2024-09-04T17:12:44.588368047Z" level=info msg="StartContainer for \"71210e117acc042d3fa02eb81e0666e4abe74a7e92432abac27da900d5d0d90e\" returns successfully" Sep 4 17:12:44.840488 kubelet[3211]: I0904 17:12:44.837639 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69487b4b9-mwnnc" podStartSLOduration=34.205511843 podCreationTimestamp="2024-09-04 17:12:04 +0000 UTC" firstStartedPulling="2024-09-04 17:12:37.238415243 +0000 UTC m=+54.158843718" lastFinishedPulling="2024-09-04 17:12:43.870247796 +0000 UTC m=+60.790676355" observedRunningTime="2024-09-04 17:12:44.831194672 +0000 UTC m=+61.751623135" watchObservedRunningTime="2024-09-04 17:12:44.83734448 +0000 UTC m=+61.757772967" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.643 [WARNING][5503] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0", GenerateName:"calico-kube-controllers-69487b4b9-", Namespace:"calico-system", SelfLink:"", UID:"0a5993e3-01c7-40e2-a922-ad294021ce88", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69487b4b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"30b2faf9de54f3e9ee5114b62a4ceb4e88ac7641e2226fe53217d687b825c7df", Pod:"calico-kube-controllers-69487b4b9-mwnnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie666717e2ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.643 [INFO][5503] k8s.go 608: Cleaning up netns ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.643 [INFO][5503] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" iface="eth0" netns="" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.643 [INFO][5503] k8s.go 615: Releasing IP address(es) ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.644 [INFO][5503] utils.go 188: Calico CNI releasing IP address ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.758 [INFO][5519] ipam_plugin.go 417: Releasing address using handleID ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.758 [INFO][5519] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.758 [INFO][5519] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.785 [WARNING][5519] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.785 [INFO][5519] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" HandleID="k8s-pod-network.e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Workload="ip--172--31--30--239-k8s-calico--kube--controllers--69487b4b9--mwnnc-eth0" Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.791 [INFO][5519] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:44.849652 containerd[2019]: 2024-09-04 17:12:44.802 [INFO][5503] k8s.go 621: Teardown processing complete. ContainerID="e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a" Sep 4 17:12:44.849652 containerd[2019]: time="2024-09-04T17:12:44.844248645Z" level=info msg="TearDown network for sandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\" successfully" Sep 4 17:12:44.890809 containerd[2019]: time="2024-09-04T17:12:44.890734149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:12:44.890933 containerd[2019]: time="2024-09-04T17:12:44.890836629Z" level=info msg="RemovePodSandbox \"e01d427bb2451e8caded0bb1d21a6c27d827db129c2ac9da901393e846cd651a\" returns successfully" Sep 4 17:12:44.895569 containerd[2019]: time="2024-09-04T17:12:44.892714401Z" level=info msg="StopPodSandbox for \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\"" Sep 4 17:12:44.964879 sshd[5401]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:44.972949 systemd[1]: run-containerd-runc-k8s.io-71210e117acc042d3fa02eb81e0666e4abe74a7e92432abac27da900d5d0d90e-runc.ij34RY.mount: Deactivated successfully. Sep 4 17:12:44.988264 systemd-logind[1996]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:12:44.992182 systemd[1]: sshd@11-172.31.30.239:22-139.178.89.65:53100.service: Deactivated successfully. Sep 4 17:12:45.007036 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:12:45.051780 systemd[1]: Started sshd@12-172.31.30.239:22-139.178.89.65:53112.service - OpenSSH per-connection server daemon (139.178.89.65:53112). Sep 4 17:12:45.056142 systemd-logind[1996]: Removed session 12. Sep 4 17:12:45.283293 sshd[5566]: Accepted publickey for core from 139.178.89.65 port 53112 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.125 [WARNING][5552] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf", Pod:"coredns-5dd5756b68-5wbnj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic45a7a9ad43", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.126 [INFO][5552] k8s.go 608: Cleaning up 
netns ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.127 [INFO][5552] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" iface="eth0" netns="" Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.127 [INFO][5552] k8s.go 615: Releasing IP address(es) ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.127 [INFO][5552] utils.go 188: Calico CNI releasing IP address ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.231 [INFO][5571] ipam_plugin.go 417: Releasing address using handleID ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.231 [INFO][5571] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.232 [INFO][5571] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.270 [WARNING][5571] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.270 [INFO][5571] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.276 [INFO][5571] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:45.287782 containerd[2019]: 2024-09-04 17:12:45.282 [INFO][5552] k8s.go 621: Teardown processing complete. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.288715 containerd[2019]: time="2024-09-04T17:12:45.287828083Z" level=info msg="TearDown network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\" successfully" Sep 4 17:12:45.288715 containerd[2019]: time="2024-09-04T17:12:45.287866711Z" level=info msg="StopPodSandbox for \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\" returns successfully" Sep 4 17:12:45.288468 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:45.290337 containerd[2019]: time="2024-09-04T17:12:45.290275699Z" level=info msg="RemovePodSandbox for \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\"" Sep 4 17:12:45.290505 containerd[2019]: time="2024-09-04T17:12:45.290337859Z" level=info msg="Forcibly stopping sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\"" Sep 4 17:12:45.310564 systemd-logind[1996]: New session 13 of user core. Sep 4 17:12:45.318051 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.456 [WARNING][5595] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"863ff27d-a93d-4e40-81b6-a08a0a0a3b4d", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"b403e37316092ef626790fa8f95de3c147ee49d8387c08ed8b2cf0bfcd0deddf", Pod:"coredns-5dd5756b68-5wbnj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic45a7a9ad43", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.457 [INFO][5595] k8s.go 608: Cleaning up netns ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.458 [INFO][5595] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" iface="eth0" netns="" Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.458 [INFO][5595] k8s.go 615: Releasing IP address(es) ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.459 [INFO][5595] utils.go 188: Calico CNI releasing IP address ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.540 [INFO][5610] ipam_plugin.go 417: Releasing address using handleID ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.542 [INFO][5610] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.542 [INFO][5610] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.594 [WARNING][5610] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.594 [INFO][5610] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" HandleID="k8s-pod-network.4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--5wbnj-eth0" Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.604 [INFO][5610] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:45.615638 containerd[2019]: 2024-09-04 17:12:45.610 [INFO][5595] k8s.go 621: Teardown processing complete. ContainerID="4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf" Sep 4 17:12:45.615638 containerd[2019]: time="2024-09-04T17:12:45.615498056Z" level=info msg="TearDown network for sandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\" successfully" Sep 4 17:12:45.622310 containerd[2019]: time="2024-09-04T17:12:45.622158068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:12:45.622452 containerd[2019]: time="2024-09-04T17:12:45.622335836Z" level=info msg="RemovePodSandbox \"4e0057b4938138b0a97d9dc4a19e43f3e5cba3c4367573cf168d5a52d672b0bf\" returns successfully" Sep 4 17:12:45.623038 containerd[2019]: time="2024-09-04T17:12:45.622981592Z" level=info msg="StopPodSandbox for \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\"" Sep 4 17:12:45.704572 sshd[5566]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:45.719790 systemd[1]: sshd@12-172.31.30.239:22-139.178.89.65:53112.service: Deactivated successfully. Sep 4 17:12:45.727335 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:12:45.730523 systemd-logind[1996]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:12:45.734564 systemd-logind[1996]: Removed session 13. Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.757 [WARNING][5630] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ae4abada-c0d1-4d32-85d1-deba613db57a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ip-172-31-30-239", ContainerID:"49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e", Pod:"coredns-5dd5756b68-lt6bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac9f123ad76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.758 [INFO][5630] k8s.go 608: Cleaning up netns ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.758 [INFO][5630] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" iface="eth0" netns="" Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.758 [INFO][5630] k8s.go 615: Releasing IP address(es) ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.758 [INFO][5630] utils.go 188: Calico CNI releasing IP address ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.844 [INFO][5638] ipam_plugin.go 417: Releasing address using handleID ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.845 [INFO][5638] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.845 [INFO][5638] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.877 [WARNING][5638] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.877 [INFO][5638] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.886 [INFO][5638] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:45.904243 containerd[2019]: 2024-09-04 17:12:45.897 [INFO][5630] k8s.go 621: Teardown processing complete. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:45.905784 containerd[2019]: time="2024-09-04T17:12:45.905183782Z" level=info msg="TearDown network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\" successfully" Sep 4 17:12:45.905784 containerd[2019]: time="2024-09-04T17:12:45.905235118Z" level=info msg="StopPodSandbox for \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\" returns successfully" Sep 4 17:12:45.907912 containerd[2019]: time="2024-09-04T17:12:45.907311574Z" level=info msg="RemovePodSandbox for \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\"" Sep 4 17:12:45.907912 containerd[2019]: time="2024-09-04T17:12:45.907383286Z" level=info msg="Forcibly stopping sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\"" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.127 [WARNING][5675] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ae4abada-c0d1-4d32-85d1-deba613db57a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"49f1fb190a9461b521f00daa4150f6e1f367f76b27ab728a363c5118212ae26e", Pod:"coredns-5dd5756b68-lt6bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac9f123ad76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.130 [INFO][5675] k8s.go 608: Cleaning up 
netns ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.131 [INFO][5675] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" iface="eth0" netns="" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.132 [INFO][5675] k8s.go 615: Releasing IP address(es) ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.132 [INFO][5675] utils.go 188: Calico CNI releasing IP address ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.287 [INFO][5691] ipam_plugin.go 417: Releasing address using handleID ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.287 [INFO][5691] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.288 [INFO][5691] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.310 [WARNING][5691] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.310 [INFO][5691] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" HandleID="k8s-pod-network.f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Workload="ip--172--31--30--239-k8s-coredns--5dd5756b68--lt6bq-eth0" Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.314 [INFO][5691] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:46.331263 containerd[2019]: 2024-09-04 17:12:46.323 [INFO][5675] k8s.go 621: Teardown processing complete. ContainerID="f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66" Sep 4 17:12:46.335060 containerd[2019]: time="2024-09-04T17:12:46.333699368Z" level=info msg="TearDown network for sandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\" successfully" Sep 4 17:12:46.348232 containerd[2019]: time="2024-09-04T17:12:46.347780756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:12:46.348530 containerd[2019]: time="2024-09-04T17:12:46.348486596Z" level=info msg="RemovePodSandbox \"f449c79d336da4307c5be0b3dc6f1e770b214b2864cf54bd6529bda540d5ad66\" returns successfully" Sep 4 17:12:46.801478 containerd[2019]: time="2024-09-04T17:12:46.801283186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:46.804631 containerd[2019]: time="2024-09-04T17:12:46.803722978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:12:46.807335 containerd[2019]: time="2024-09-04T17:12:46.805656838Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:46.815023 containerd[2019]: time="2024-09-04T17:12:46.813740794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:46.820088 containerd[2019]: time="2024-09-04T17:12:46.820017838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 2.948856686s" Sep 4 17:12:46.820284 containerd[2019]: time="2024-09-04T17:12:46.820247938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:12:46.826752 containerd[2019]: 
time="2024-09-04T17:12:46.826674298Z" level=info msg="CreateContainer within sandbox \"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:12:46.863986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771442783.mount: Deactivated successfully. Sep 4 17:12:46.872105 containerd[2019]: time="2024-09-04T17:12:46.870574115Z" level=info msg="CreateContainer within sandbox \"2d366893672b76ba474582fa85a6af0da046b3ef53f7160bc6d34ed497ae4ac3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4b1881ef3a76d363bda0d1c7e11782c5e13b281f259e691e17de468cd2265a6a\"" Sep 4 17:12:46.874610 containerd[2019]: time="2024-09-04T17:12:46.873015491Z" level=info msg="StartContainer for \"4b1881ef3a76d363bda0d1c7e11782c5e13b281f259e691e17de468cd2265a6a\"" Sep 4 17:12:46.954534 systemd[1]: Started cri-containerd-4b1881ef3a76d363bda0d1c7e11782c5e13b281f259e691e17de468cd2265a6a.scope - libcontainer container 4b1881ef3a76d363bda0d1c7e11782c5e13b281f259e691e17de468cd2265a6a. 
Sep 4 17:12:47.046334 containerd[2019]: time="2024-09-04T17:12:47.046246603Z" level=info msg="StartContainer for \"4b1881ef3a76d363bda0d1c7e11782c5e13b281f259e691e17de468cd2265a6a\" returns successfully" Sep 4 17:12:47.575197 kubelet[3211]: I0904 17:12:47.575136 3211 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:12:47.575197 kubelet[3211]: I0904 17:12:47.575205 3211 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:12:47.900350 kubelet[3211]: I0904 17:12:47.900289 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-sdjd2" podStartSLOduration=34.265390632 podCreationTimestamp="2024-09-04 17:12:04 +0000 UTC" firstStartedPulling="2024-09-04 17:12:37.18591205 +0000 UTC m=+54.106340525" lastFinishedPulling="2024-09-04 17:12:46.820754146 +0000 UTC m=+63.741182621" observedRunningTime="2024-09-04 17:12:47.899834292 +0000 UTC m=+64.820262803" watchObservedRunningTime="2024-09-04 17:12:47.900232728 +0000 UTC m=+64.820661239" Sep 4 17:12:50.742137 systemd[1]: Started sshd@13-172.31.30.239:22-139.178.89.65:41140.service - OpenSSH per-connection server daemon (139.178.89.65:41140). Sep 4 17:12:50.921030 sshd[5740]: Accepted publickey for core from 139.178.89.65 port 41140 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:50.923746 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:50.932888 systemd-logind[1996]: New session 14 of user core. Sep 4 17:12:50.937874 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 17:12:51.190631 sshd[5740]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:51.197009 systemd[1]: sshd@13-172.31.30.239:22-139.178.89.65:41140.service: Deactivated successfully. Sep 4 17:12:51.201999 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:12:51.204288 systemd-logind[1996]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:12:51.206453 systemd-logind[1996]: Removed session 14. Sep 4 17:12:56.231119 systemd[1]: Started sshd@14-172.31.30.239:22-139.178.89.65:41154.service - OpenSSH per-connection server daemon (139.178.89.65:41154). Sep 4 17:12:56.414355 sshd[5806]: Accepted publickey for core from 139.178.89.65 port 41154 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:56.415542 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:56.426791 systemd-logind[1996]: New session 15 of user core. Sep 4 17:12:56.435178 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:12:56.758281 sshd[5806]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:56.766760 systemd-logind[1996]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:12:56.769374 systemd[1]: sshd@14-172.31.30.239:22-139.178.89.65:41154.service: Deactivated successfully. Sep 4 17:12:56.778960 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:12:56.798366 systemd-logind[1996]: Removed session 15. Sep 4 17:12:56.807253 systemd[1]: Started sshd@15-172.31.30.239:22-139.178.89.65:41164.service - OpenSSH per-connection server daemon (139.178.89.65:41164). Sep 4 17:12:56.995358 sshd[5819]: Accepted publickey for core from 139.178.89.65 port 41164 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:56.998093 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:57.006793 systemd-logind[1996]: New session 16 of user core. 
Sep 4 17:12:57.012860 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:12:57.514120 sshd[5819]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:57.520513 systemd[1]: sshd@15-172.31.30.239:22-139.178.89.65:41164.service: Deactivated successfully. Sep 4 17:12:57.525976 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:12:57.527706 systemd-logind[1996]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:12:57.530686 systemd-logind[1996]: Removed session 16. Sep 4 17:12:57.551199 systemd[1]: Started sshd@16-172.31.30.239:22-139.178.89.65:41180.service - OpenSSH per-connection server daemon (139.178.89.65:41180). Sep 4 17:12:57.741092 sshd[5833]: Accepted publickey for core from 139.178.89.65 port 41180 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:57.744411 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:57.754402 systemd-logind[1996]: New session 17 of user core. Sep 4 17:12:57.764310 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:12:59.152970 sshd[5833]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:59.162538 systemd[1]: sshd@16-172.31.30.239:22-139.178.89.65:41180.service: Deactivated successfully. Sep 4 17:12:59.169512 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:12:59.170285 systemd[1]: session-17.scope: Consumed 1.063s CPU time. Sep 4 17:12:59.176842 systemd-logind[1996]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:12:59.203133 systemd[1]: Started sshd@17-172.31.30.239:22-139.178.89.65:49466.service - OpenSSH per-connection server daemon (139.178.89.65:49466). Sep 4 17:12:59.208431 systemd-logind[1996]: Removed session 17. 
Sep 4 17:12:59.395209 sshd[5849]: Accepted publickey for core from 139.178.89.65 port 49466 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:59.398874 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:59.407696 systemd-logind[1996]: New session 18 of user core. Sep 4 17:12:59.412939 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:13:00.116889 sshd[5849]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:00.123563 systemd[1]: sshd@17-172.31.30.239:22-139.178.89.65:49466.service: Deactivated successfully. Sep 4 17:13:00.128892 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:13:00.132222 systemd-logind[1996]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:13:00.134264 systemd-logind[1996]: Removed session 18. Sep 4 17:13:00.159129 systemd[1]: Started sshd@18-172.31.30.239:22-139.178.89.65:49476.service - OpenSSH per-connection server daemon (139.178.89.65:49476). Sep 4 17:13:00.337500 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 49476 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:00.340171 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:00.348805 systemd-logind[1996]: New session 19 of user core. Sep 4 17:13:00.364877 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:13:00.603944 sshd[5863]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:00.610132 systemd[1]: sshd@18-172.31.30.239:22-139.178.89.65:49476.service: Deactivated successfully. Sep 4 17:13:00.615037 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:13:00.616625 systemd-logind[1996]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:13:00.619770 systemd-logind[1996]: Removed session 19. 
Sep 4 17:13:05.643785 systemd[1]: Started sshd@19-172.31.30.239:22-139.178.89.65:49478.service - OpenSSH per-connection server daemon (139.178.89.65:49478). Sep 4 17:13:05.822135 sshd[5881]: Accepted publickey for core from 139.178.89.65 port 49478 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:05.824876 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:05.833960 systemd-logind[1996]: New session 20 of user core. Sep 4 17:13:05.839869 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:13:06.136913 sshd[5881]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:06.145065 systemd[1]: sshd@19-172.31.30.239:22-139.178.89.65:49478.service: Deactivated successfully. Sep 4 17:13:06.150227 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:13:06.156897 systemd-logind[1996]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:13:06.160519 systemd-logind[1996]: Removed session 20. Sep 4 17:13:11.178153 systemd[1]: Started sshd@20-172.31.30.239:22-139.178.89.65:56502.service - OpenSSH per-connection server daemon (139.178.89.65:56502). Sep 4 17:13:11.364705 sshd[5898]: Accepted publickey for core from 139.178.89.65 port 56502 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:11.366097 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:11.379768 systemd-logind[1996]: New session 21 of user core. Sep 4 17:13:11.387773 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:13:11.665137 sshd[5898]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:11.677478 systemd[1]: sshd@20-172.31.30.239:22-139.178.89.65:56502.service: Deactivated successfully. Sep 4 17:13:11.678090 systemd-logind[1996]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:13:11.688133 systemd[1]: session-21.scope: Deactivated successfully. 
Sep 4 17:13:11.692863 systemd-logind[1996]: Removed session 21. Sep 4 17:13:12.550908 kubelet[3211]: I0904 17:13:12.550839 3211 topology_manager.go:215] "Topology Admit Handler" podUID="e00b6ee5-0595-4677-ad98-f35ee185f79a" podNamespace="calico-apiserver" podName="calico-apiserver-574678cc54-t8z5f" Sep 4 17:13:12.569254 systemd[1]: Created slice kubepods-besteffort-pode00b6ee5_0595_4677_ad98_f35ee185f79a.slice - libcontainer container kubepods-besteffort-pode00b6ee5_0595_4677_ad98_f35ee185f79a.slice. Sep 4 17:13:12.576952 kubelet[3211]: W0904 17:13:12.576890 3211 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-30-239" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-239' and this object Sep 4 17:13:12.576952 kubelet[3211]: E0904 17:13:12.576954 3211 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-30-239" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-239' and this object Sep 4 17:13:12.577172 kubelet[3211]: W0904 17:13:12.577049 3211 reflector.go:535] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-239" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-239' and this object Sep 4 17:13:12.577172 kubelet[3211]: E0904 17:13:12.577076 3211 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-239" 
cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-239' and this object Sep 4 17:13:12.733499 kubelet[3211]: I0904 17:13:12.732931 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp4vx\" (UniqueName: \"kubernetes.io/projected/e00b6ee5-0595-4677-ad98-f35ee185f79a-kube-api-access-xp4vx\") pod \"calico-apiserver-574678cc54-t8z5f\" (UID: \"e00b6ee5-0595-4677-ad98-f35ee185f79a\") " pod="calico-apiserver/calico-apiserver-574678cc54-t8z5f" Sep 4 17:13:12.733926 kubelet[3211]: I0904 17:13:12.733825 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e00b6ee5-0595-4677-ad98-f35ee185f79a-calico-apiserver-certs\") pod \"calico-apiserver-574678cc54-t8z5f\" (UID: \"e00b6ee5-0595-4677-ad98-f35ee185f79a\") " pod="calico-apiserver/calico-apiserver-574678cc54-t8z5f" Sep 4 17:13:13.836842 kubelet[3211]: E0904 17:13:13.836555 3211 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:13.836842 kubelet[3211]: E0904 17:13:13.836723 3211 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e00b6ee5-0595-4677-ad98-f35ee185f79a-calico-apiserver-certs podName:e00b6ee5-0595-4677-ad98-f35ee185f79a nodeName:}" failed. No retries permitted until 2024-09-04 17:13:14.336675353 +0000 UTC m=+91.257103828 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e00b6ee5-0595-4677-ad98-f35ee185f79a-calico-apiserver-certs") pod "calico-apiserver-574678cc54-t8z5f" (UID: "e00b6ee5-0595-4677-ad98-f35ee185f79a") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:13:14.380595 containerd[2019]: time="2024-09-04T17:13:14.380513387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574678cc54-t8z5f,Uid:e00b6ee5-0595-4677-ad98-f35ee185f79a,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:13:14.665443 systemd-networkd[1895]: cali8d515f524f8: Link UP Sep 4 17:13:14.667785 systemd-networkd[1895]: cali8d515f524f8: Gained carrier Sep 4 17:13:14.668303 (udev-worker)[5936]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.499 [INFO][5917] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0 calico-apiserver-574678cc54- calico-apiserver e00b6ee5-0595-4677-ad98-f35ee185f79a 1089 0 2024-09-04 17:13:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574678cc54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-239 calico-apiserver-574678cc54-t8z5f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8d515f524f8 [] []}} ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.499 [INFO][5917] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.581 [INFO][5928] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" HandleID="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Workload="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.606 [INFO][5928] ipam_plugin.go 270: Auto assigning IP ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" HandleID="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Workload="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebe20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-239", "pod":"calico-apiserver-574678cc54-t8z5f", "timestamp":"2024-09-04 17:13:14.581254308 +0000 UTC"}, Hostname:"ip-172-31-30-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.607 [INFO][5928] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.607 [INFO][5928] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.607 [INFO][5928] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-239' Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.610 [INFO][5928] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.617 [INFO][5928] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.630 [INFO][5928] ipam.go 489: Trying affinity for 192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.633 [INFO][5928] ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.637 [INFO][5928] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.637 [INFO][5928] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.639 [INFO][5928] ipam.go 1685: Creating new handle: k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8 Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.645 [INFO][5928] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.654 [INFO][5928] ipam.go 1216: Successfully claimed IPs: [192.168.8.5/26] block=192.168.8.0/26 
handle="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.654 [INFO][5928] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.5/26] handle="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" host="ip-172-31-30-239" Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.654 [INFO][5928] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:14.698826 containerd[2019]: 2024-09-04 17:13:14.654 [INFO][5928] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.5/26] IPv6=[] ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" HandleID="k8s-pod-network.8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Workload="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" Sep 4 17:13:14.700803 containerd[2019]: 2024-09-04 17:13:14.658 [INFO][5917] k8s.go 386: Populated endpoint ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0", GenerateName:"calico-apiserver-574678cc54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e00b6ee5-0595-4677-ad98-f35ee185f79a", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574678cc54", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"", Pod:"calico-apiserver-574678cc54-t8z5f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d515f524f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:14.700803 containerd[2019]: 2024-09-04 17:13:14.658 [INFO][5917] k8s.go 387: Calico CNI using IPs: [192.168.8.5/32] ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" Sep 4 17:13:14.700803 containerd[2019]: 2024-09-04 17:13:14.658 [INFO][5917] dataplane_linux.go 68: Setting the host side veth name to cali8d515f524f8 ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" Sep 4 17:13:14.700803 containerd[2019]: 2024-09-04 17:13:14.667 [INFO][5917] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" Sep 4 17:13:14.700803 containerd[2019]: 2024-09-04 17:13:14.669 [INFO][5917] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0", GenerateName:"calico-apiserver-574678cc54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e00b6ee5-0595-4677-ad98-f35ee185f79a", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574678cc54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-239", ContainerID:"8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8", Pod:"calico-apiserver-574678cc54-t8z5f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d515f524f8", MAC:"fe:d1:4a:7b:d7:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:14.700803 containerd[2019]: 2024-09-04 17:13:14.683 [INFO][5917] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8" Namespace="calico-apiserver" Pod="calico-apiserver-574678cc54-t8z5f" WorkloadEndpoint="ip--172--31--30--239-k8s-calico--apiserver--574678cc54--t8z5f-eth0" Sep 4 17:13:14.751032 containerd[2019]: time="2024-09-04T17:13:14.750107461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:13:14.751184 containerd[2019]: time="2024-09-04T17:13:14.751078549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:14.751454 containerd[2019]: time="2024-09-04T17:13:14.751244593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:13:14.751675 containerd[2019]: time="2024-09-04T17:13:14.751337917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:14.807537 systemd[1]: Started cri-containerd-8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8.scope - libcontainer container 8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8. 
Sep 4 17:13:14.918262 containerd[2019]: time="2024-09-04T17:13:14.917711174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574678cc54-t8z5f,Uid:e00b6ee5-0595-4677-ad98-f35ee185f79a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8\"" Sep 4 17:13:14.922237 containerd[2019]: time="2024-09-04T17:13:14.921670418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:13:15.990779 systemd-networkd[1895]: cali8d515f524f8: Gained IPv6LL Sep 4 17:13:16.714540 systemd[1]: Started sshd@21-172.31.30.239:22-139.178.89.65:56506.service - OpenSSH per-connection server daemon (139.178.89.65:56506). Sep 4 17:13:16.924957 sshd[6008]: Accepted publickey for core from 139.178.89.65 port 56506 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:16.930069 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:16.946541 systemd-logind[1996]: New session 22 of user core. Sep 4 17:13:16.958837 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:13:17.344168 sshd[6008]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:17.355144 systemd[1]: sshd@21-172.31.30.239:22-139.178.89.65:56506.service: Deactivated successfully. Sep 4 17:13:17.361931 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:13:17.366302 systemd-logind[1996]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:13:17.370301 systemd-logind[1996]: Removed session 22. 
Sep 4 17:13:17.824315 containerd[2019]: time="2024-09-04T17:13:17.824231500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:17.827855 containerd[2019]: time="2024-09-04T17:13:17.827779288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884"
Sep 4 17:13:17.832849 containerd[2019]: time="2024-09-04T17:13:17.828975196Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:17.839001 containerd[2019]: time="2024-09-04T17:13:17.838934548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:17.841350 containerd[2019]: time="2024-09-04T17:13:17.841258492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.919495962s"
Sep 4 17:13:17.841599 containerd[2019]: time="2024-09-04T17:13:17.841547932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Sep 4 17:13:17.848450 containerd[2019]: time="2024-09-04T17:13:17.848363884Z" level=info msg="CreateContainer within sandbox \"8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 17:13:17.880644 containerd[2019]: time="2024-09-04T17:13:17.879981713Z" level=info msg="CreateContainer within sandbox \"8caf73102dad0abb26538864d4de5377a459ef096a3937972ef0ab7f63dde9e8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3cdd994c261c7bba7f89e2b802f80b00ecf72373b2df54276b942b5ebd503086\""
Sep 4 17:13:17.885948 containerd[2019]: time="2024-09-04T17:13:17.885870941Z" level=info msg="StartContainer for \"3cdd994c261c7bba7f89e2b802f80b00ecf72373b2df54276b942b5ebd503086\""
Sep 4 17:13:17.980986 systemd[1]: Started cri-containerd-3cdd994c261c7bba7f89e2b802f80b00ecf72373b2df54276b942b5ebd503086.scope - libcontainer container 3cdd994c261c7bba7f89e2b802f80b00ecf72373b2df54276b942b5ebd503086.
Sep 4 17:13:18.005681 ntpd[1988]: Listen normally on 14 cali8d515f524f8 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:13:18.006261 ntpd[1988]: 4 Sep 17:13:18 ntpd[1988]: Listen normally on 14 cali8d515f524f8 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:13:18.100501 containerd[2019]: time="2024-09-04T17:13:18.100324826Z" level=info msg="StartContainer for \"3cdd994c261c7bba7f89e2b802f80b00ecf72373b2df54276b942b5ebd503086\" returns successfully"
Sep 4 17:13:19.019364 kubelet[3211]: I0904 17:13:19.019308 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-574678cc54-t8z5f" podStartSLOduration=4.097078056 podCreationTimestamp="2024-09-04 17:13:12 +0000 UTC" firstStartedPulling="2024-09-04 17:13:14.920365226 +0000 UTC m=+91.840793701" lastFinishedPulling="2024-09-04 17:13:17.842530336 +0000 UTC m=+94.762958823" observedRunningTime="2024-09-04 17:13:19.019143674 +0000 UTC m=+95.939572161" watchObservedRunningTime="2024-09-04 17:13:19.019243178 +0000 UTC m=+95.939671665"
Sep 4 17:13:22.387258 systemd[1]: Started sshd@22-172.31.30.239:22-139.178.89.65:46958.service - OpenSSH per-connection server daemon (139.178.89.65:46958).
Sep 4 17:13:22.562570 sshd[6107]: Accepted publickey for core from 139.178.89.65 port 46958 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:22.565633 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:22.573959 systemd-logind[1996]: New session 23 of user core.
Sep 4 17:13:22.582850 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:13:22.819439 sshd[6107]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:22.826077 systemd[1]: sshd@22-172.31.30.239:22-139.178.89.65:46958.service: Deactivated successfully.
Sep 4 17:13:22.831772 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:13:22.833870 systemd-logind[1996]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:13:22.835836 systemd-logind[1996]: Removed session 23.
Sep 4 17:13:27.864106 systemd[1]: Started sshd@23-172.31.30.239:22-139.178.89.65:59390.service - OpenSSH per-connection server daemon (139.178.89.65:59390).
Sep 4 17:13:28.046274 sshd[6157]: Accepted publickey for core from 139.178.89.65 port 59390 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:28.051176 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:28.062556 systemd-logind[1996]: New session 24 of user core.
Sep 4 17:13:28.071866 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:13:28.308729 sshd[6157]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:28.315498 systemd[1]: sshd@23-172.31.30.239:22-139.178.89.65:59390.service: Deactivated successfully.
Sep 4 17:13:28.320326 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:13:28.321764 systemd-logind[1996]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:13:28.323748 systemd-logind[1996]: Removed session 24.
Sep 4 17:13:33.353030 systemd[1]: Started sshd@24-172.31.30.239:22-139.178.89.65:59406.service - OpenSSH per-connection server daemon (139.178.89.65:59406).
Sep 4 17:13:33.539623 sshd[6171]: Accepted publickey for core from 139.178.89.65 port 59406 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:33.542187 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:33.549736 systemd-logind[1996]: New session 25 of user core.
Sep 4 17:13:33.557876 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:13:33.805978 sshd[6171]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:33.812447 systemd[1]: sshd@24-172.31.30.239:22-139.178.89.65:59406.service: Deactivated successfully.
Sep 4 17:13:33.817509 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:13:33.819197 systemd-logind[1996]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:13:33.821731 systemd-logind[1996]: Removed session 25.
Sep 4 17:14:19.243841 systemd[1]: cri-containerd-e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849.scope: Deactivated successfully.
Sep 4 17:14:19.244733 systemd[1]: cri-containerd-e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849.scope: Consumed 4.960s CPU time, 22.3M memory peak, 0B memory swap peak.
Sep 4 17:14:19.303490 containerd[2019]: time="2024-09-04T17:14:19.302065814Z" level=info msg="shim disconnected" id=e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849 namespace=k8s.io
Sep 4 17:14:19.303490 containerd[2019]: time="2024-09-04T17:14:19.302311262Z" level=warning msg="cleaning up after shim disconnected" id=e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849 namespace=k8s.io
Sep 4 17:14:19.303490 containerd[2019]: time="2024-09-04T17:14:19.302341226Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:19.306870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849-rootfs.mount: Deactivated successfully.
Sep 4 17:14:20.170738 kubelet[3211]: I0904 17:14:20.169770 3211 scope.go:117] "RemoveContainer" containerID="e4bcd4e99b45cdfc0298929fc5420336e731bc543a1ee0d755cef6cd7b778849"
Sep 4 17:14:20.174372 containerd[2019]: time="2024-09-04T17:14:20.174287894Z" level=info msg="CreateContainer within sandbox \"3520f17911940291f74b7bc2a86a90ecb1f8ce3405b9eb525ed351ebb32c7a47\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:14:20.200609 containerd[2019]: time="2024-09-04T17:14:20.200490530Z" level=info msg="CreateContainer within sandbox \"3520f17911940291f74b7bc2a86a90ecb1f8ce3405b9eb525ed351ebb32c7a47\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"32324fd61add44e9d34f1a043b4fdfc983785e8ca2c0a5f0852157f7e32ed96a\""
Sep 4 17:14:20.203975 containerd[2019]: time="2024-09-04T17:14:20.203909906Z" level=info msg="StartContainer for \"32324fd61add44e9d34f1a043b4fdfc983785e8ca2c0a5f0852157f7e32ed96a\""
Sep 4 17:14:20.210094 systemd[1]: cri-containerd-54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81.scope: Deactivated successfully.
Sep 4 17:14:20.212794 systemd[1]: cri-containerd-54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81.scope: Consumed 10.764s CPU time.
Sep 4 17:14:20.273066 systemd[1]: Started cri-containerd-32324fd61add44e9d34f1a043b4fdfc983785e8ca2c0a5f0852157f7e32ed96a.scope - libcontainer container 32324fd61add44e9d34f1a043b4fdfc983785e8ca2c0a5f0852157f7e32ed96a.
Sep 4 17:14:20.291539 containerd[2019]: time="2024-09-04T17:14:20.291376359Z" level=info msg="shim disconnected" id=54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81 namespace=k8s.io
Sep 4 17:14:20.292650 containerd[2019]: time="2024-09-04T17:14:20.291554967Z" level=warning msg="cleaning up after shim disconnected" id=54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81 namespace=k8s.io
Sep 4 17:14:20.292650 containerd[2019]: time="2024-09-04T17:14:20.291678495Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:20.305755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81-rootfs.mount: Deactivated successfully.
Sep 4 17:14:20.331552 containerd[2019]: time="2024-09-04T17:14:20.331445907Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:14:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:14:20.435975 containerd[2019]: time="2024-09-04T17:14:20.435089199Z" level=info msg="StartContainer for \"32324fd61add44e9d34f1a043b4fdfc983785e8ca2c0a5f0852157f7e32ed96a\" returns successfully"
Sep 4 17:14:21.185794 kubelet[3211]: I0904 17:14:21.185743 3211 scope.go:117] "RemoveContainer" containerID="54ab11d6f0edc7825aa4636729b12413fca51c840d8f4079b516775ba51fcf81"
Sep 4 17:14:21.198606 containerd[2019]: time="2024-09-04T17:14:21.196553499Z" level=info msg="CreateContainer within sandbox \"7ac9634aa0737728f96816800ca2f1fb285d500913ded8efb00377ebff86296a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:14:21.229939 containerd[2019]: time="2024-09-04T17:14:21.229698015Z" level=info msg="CreateContainer within sandbox \"7ac9634aa0737728f96816800ca2f1fb285d500913ded8efb00377ebff86296a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"282684e0719f0a35b2a489072ef060cec97ae50770ffce9a7b421e4f1e2f193b\""
Sep 4 17:14:21.230709 containerd[2019]: time="2024-09-04T17:14:21.230572899Z" level=info msg="StartContainer for \"282684e0719f0a35b2a489072ef060cec97ae50770ffce9a7b421e4f1e2f193b\""
Sep 4 17:14:21.302789 systemd[1]: Started cri-containerd-282684e0719f0a35b2a489072ef060cec97ae50770ffce9a7b421e4f1e2f193b.scope - libcontainer container 282684e0719f0a35b2a489072ef060cec97ae50770ffce9a7b421e4f1e2f193b.
Sep 4 17:14:21.364451 containerd[2019]: time="2024-09-04T17:14:21.364383400Z" level=info msg="StartContainer for \"282684e0719f0a35b2a489072ef060cec97ae50770ffce9a7b421e4f1e2f193b\" returns successfully"
Sep 4 17:14:24.875284 systemd[1]: cri-containerd-9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210.scope: Deactivated successfully.
Sep 4 17:14:24.876898 systemd[1]: cri-containerd-9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210.scope: Consumed 4.064s CPU time, 16.1M memory peak, 0B memory swap peak.
Sep 4 17:14:24.913205 containerd[2019]: time="2024-09-04T17:14:24.913129462Z" level=info msg="shim disconnected" id=9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210 namespace=k8s.io
Sep 4 17:14:24.914195 containerd[2019]: time="2024-09-04T17:14:24.914003266Z" level=warning msg="cleaning up after shim disconnected" id=9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210 namespace=k8s.io
Sep 4 17:14:24.914615 containerd[2019]: time="2024-09-04T17:14:24.914418010Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:24.924840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210-rootfs.mount: Deactivated successfully.
Sep 4 17:14:25.235813 kubelet[3211]: I0904 17:14:25.235248 3211 scope.go:117] "RemoveContainer" containerID="9826e7f5140a00c80dcb9ef37fb731b7b9140948eef367b2b5662bb582f9d210"
Sep 4 17:14:25.240328 containerd[2019]: time="2024-09-04T17:14:25.240215611Z" level=info msg="CreateContainer within sandbox \"612daf87904aca33a642792a96e9aede246a45f5830c9cf393a4c1363bb7b27c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:14:25.271944 containerd[2019]: time="2024-09-04T17:14:25.271134055Z" level=info msg="CreateContainer within sandbox \"612daf87904aca33a642792a96e9aede246a45f5830c9cf393a4c1363bb7b27c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"33c2a8f96aa09742e59cd6e6289a8eb9fd60a2398051501014b17d621e199de0\""
Sep 4 17:14:25.274413 containerd[2019]: time="2024-09-04T17:14:25.274358959Z" level=info msg="StartContainer for \"33c2a8f96aa09742e59cd6e6289a8eb9fd60a2398051501014b17d621e199de0\""
Sep 4 17:14:25.359239 systemd[1]: Started cri-containerd-33c2a8f96aa09742e59cd6e6289a8eb9fd60a2398051501014b17d621e199de0.scope - libcontainer container 33c2a8f96aa09742e59cd6e6289a8eb9fd60a2398051501014b17d621e199de0.
Sep 4 17:14:25.436146 containerd[2019]: time="2024-09-04T17:14:25.436070024Z" level=info msg="StartContainer for \"33c2a8f96aa09742e59cd6e6289a8eb9fd60a2398051501014b17d621e199de0\" returns successfully"
Sep 4 17:14:26.516631 kubelet[3211]: E0904 17:14:26.515692 3211 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-239?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 17:14:36.517335 kubelet[3211]: E0904 17:14:36.516564 3211 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-239?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"