Jul 2 08:57:03.180740 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 2 08:57:03.180810 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 08:57:03.180838 kernel: KASLR disabled due to lack of seed
Jul 2 08:57:03.180855 kernel: efi: EFI v2.7 by EDK II
Jul 2 08:57:03.180871 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18
Jul 2 08:57:03.180886 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:57:03.180904 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 2 08:57:03.180919 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 2 08:57:03.180935 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 08:57:03.180950 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 2 08:57:03.180971 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 08:57:03.180987 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 2 08:57:03.181002 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 2 08:57:03.181017 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 2 08:57:03.181036 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 08:57:03.181058 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 2 08:57:03.181075 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 2 08:57:03.181091 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 2 08:57:03.181107 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 2 08:57:03.181123 kernel: printk: bootconsole [uart0] enabled
Jul 2 08:57:03.181140 kernel: NUMA: Failed to initialise from firmware
Jul 2 08:57:03.181156 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 08:57:03.181173 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 2 08:57:03.181189 kernel: Zone ranges:
Jul 2 08:57:03.181205 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 2 08:57:03.181221 kernel: DMA32 empty
Jul 2 08:57:03.181242 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 2 08:57:03.181259 kernel: Movable zone start for each node
Jul 2 08:57:03.181275 kernel: Early memory node ranges
Jul 2 08:57:03.181291 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 2 08:57:03.181307 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 2 08:57:03.181323 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 2 08:57:03.181340 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 2 08:57:03.181356 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 2 08:57:03.181372 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 2 08:57:03.181388 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 2 08:57:03.181405 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 2 08:57:03.181421 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 08:57:03.181442 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 2 08:57:03.181459 kernel: psci: probing for conduit method from ACPI.
Jul 2 08:57:03.181483 kernel: psci: PSCIv1.0 detected in firmware.
Jul 2 08:57:03.181500 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 08:57:03.181518 kernel: psci: Trusted OS migration not required
Jul 2 08:57:03.181540 kernel: psci: SMC Calling Convention v1.1
Jul 2 08:57:03.181557 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 08:57:03.181574 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 08:57:03.181592 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 08:57:03.181609 kernel: Detected PIPT I-cache on CPU0
Jul 2 08:57:03.181626 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 08:57:03.181643 kernel: CPU features: detected: Spectre-v2
Jul 2 08:57:03.181660 kernel: CPU features: detected: Spectre-v3a
Jul 2 08:57:03.181677 kernel: CPU features: detected: Spectre-BHB
Jul 2 08:57:03.181695 kernel: CPU features: detected: ARM erratum 1742098
Jul 2 08:57:03.181712 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 2 08:57:03.181734 kernel: alternatives: applying boot alternatives
Jul 2 08:57:03.181754 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 08:57:03.183813 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:57:03.183863 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 08:57:03.183882 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:57:03.183900 kernel: Fallback order for Node 0: 0
Jul 2 08:57:03.183917 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 2 08:57:03.183934 kernel: Policy zone: Normal
Jul 2 08:57:03.183952 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:57:03.183969 kernel: software IO TLB: area num 2.
Jul 2 08:57:03.183986 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 2 08:57:03.184014 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Jul 2 08:57:03.184032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 08:57:03.184050 kernel: trace event string verifier disabled
Jul 2 08:57:03.184067 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 08:57:03.184086 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:57:03.184103 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 08:57:03.184121 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 08:57:03.184139 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:57:03.184156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:57:03.184174 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 08:57:03.184191 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 08:57:03.184213 kernel: GICv3: 96 SPIs implemented
Jul 2 08:57:03.184230 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 08:57:03.184247 kernel: Root IRQ handler: gic_handle_irq
Jul 2 08:57:03.184281 kernel: GICv3: GICv3 features: 16 PPIs
Jul 2 08:57:03.184304 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 2 08:57:03.184322 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 2 08:57:03.184339 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 08:57:03.184357 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 08:57:03.184375 kernel: GICv3: using LPI property table @0x00000004000e0000
Jul 2 08:57:03.184392 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 2 08:57:03.184410 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Jul 2 08:57:03.184427 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 08:57:03.184451 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 2 08:57:03.184469 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 2 08:57:03.184486 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 2 08:57:03.184504 kernel: Console: colour dummy device 80x25
Jul 2 08:57:03.184522 kernel: printk: console [tty1] enabled
Jul 2 08:57:03.184540 kernel: ACPI: Core revision 20230628
Jul 2 08:57:03.184558 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 2 08:57:03.184576 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:57:03.184594 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 08:57:03.184611 kernel: SELinux: Initializing.
Jul 2 08:57:03.184634 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:57:03.184652 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:57:03.184669 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:57:03.184687 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:57:03.184705 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:57:03.184722 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 08:57:03.184740 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 2 08:57:03.184757 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 2 08:57:03.184810 kernel: Remapping and enabling EFI services.
Jul 2 08:57:03.184837 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:57:03.184855 kernel: Detected PIPT I-cache on CPU1
Jul 2 08:57:03.184872 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 2 08:57:03.184890 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Jul 2 08:57:03.184908 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 2 08:57:03.184925 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 08:57:03.184942 kernel: SMP: Total of 2 processors activated.
Jul 2 08:57:03.184960 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 08:57:03.184977 kernel: CPU features: detected: 32-bit EL1 Support
Jul 2 08:57:03.185000 kernel: CPU features: detected: CRC32 instructions
Jul 2 08:57:03.185018 kernel: CPU: All CPU(s) started at EL1
Jul 2 08:57:03.185048 kernel: alternatives: applying system-wide alternatives
Jul 2 08:57:03.185071 kernel: devtmpfs: initialized
Jul 2 08:57:03.185090 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:57:03.185108 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 08:57:03.185126 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:57:03.185144 kernel: SMBIOS 3.0.0 present.
Jul 2 08:57:03.185163 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 2 08:57:03.185186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:57:03.185204 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 08:57:03.185223 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 08:57:03.185241 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 08:57:03.185260 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:57:03.185279 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1
Jul 2 08:57:03.185297 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:57:03.185320 kernel: cpuidle: using governor menu
Jul 2 08:57:03.185339 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 08:57:03.185357 kernel: ASID allocator initialised with 65536 entries
Jul 2 08:57:03.185376 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:57:03.185394 kernel: Serial: AMBA PL011 UART driver
Jul 2 08:57:03.185412 kernel: Modules: 17600 pages in range for non-PLT usage
Jul 2 08:57:03.185431 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 08:57:03.185449 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 08:57:03.185467 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 08:57:03.185490 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 08:57:03.185509 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 08:57:03.185539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:57:03.185564 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 08:57:03.185583 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 08:57:03.185601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 08:57:03.185619 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:57:03.185637 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:57:03.185656 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:57:03.185680 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:57:03.185699 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:57:03.185717 kernel: ACPI: Interpreter enabled
Jul 2 08:57:03.185735 kernel: ACPI: Using GIC for interrupt routing
Jul 2 08:57:03.185753 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 08:57:03.185786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 2 08:57:03.186162 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:57:03.186417 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 08:57:03.186630 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 08:57:03.190225 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 2 08:57:03.190472 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 2 08:57:03.190499 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 2 08:57:03.190519 kernel: acpiphp: Slot [1] registered
Jul 2 08:57:03.190537 kernel: acpiphp: Slot [2] registered
Jul 2 08:57:03.190556 kernel: acpiphp: Slot [3] registered
Jul 2 08:57:03.190575 kernel: acpiphp: Slot [4] registered
Jul 2 08:57:03.190593 kernel: acpiphp: Slot [5] registered
Jul 2 08:57:03.190622 kernel: acpiphp: Slot [6] registered
Jul 2 08:57:03.190642 kernel: acpiphp: Slot [7] registered
Jul 2 08:57:03.190664 kernel: acpiphp: Slot [8] registered
Jul 2 08:57:03.190683 kernel: acpiphp: Slot [9] registered
Jul 2 08:57:03.190701 kernel: acpiphp: Slot [10] registered
Jul 2 08:57:03.190719 kernel: acpiphp: Slot [11] registered
Jul 2 08:57:03.190737 kernel: acpiphp: Slot [12] registered
Jul 2 08:57:03.190755 kernel: acpiphp: Slot [13] registered
Jul 2 08:57:03.190790 kernel: acpiphp: Slot [14] registered
Jul 2 08:57:03.190821 kernel: acpiphp: Slot [15] registered
Jul 2 08:57:03.190840 kernel: acpiphp: Slot [16] registered
Jul 2 08:57:03.190859 kernel: acpiphp: Slot [17] registered
Jul 2 08:57:03.190877 kernel: acpiphp: Slot [18] registered
Jul 2 08:57:03.190896 kernel: acpiphp: Slot [19] registered
Jul 2 08:57:03.190914 kernel: acpiphp: Slot [20] registered
Jul 2 08:57:03.190932 kernel: acpiphp: Slot [21] registered
Jul 2 08:57:03.190950 kernel: acpiphp: Slot [22] registered
Jul 2 08:57:03.190968 kernel: acpiphp: Slot [23] registered
Jul 2 08:57:03.190987 kernel: acpiphp: Slot [24] registered
Jul 2 08:57:03.191010 kernel: acpiphp: Slot [25] registered
Jul 2 08:57:03.191029 kernel: acpiphp: Slot [26] registered
Jul 2 08:57:03.191047 kernel: acpiphp: Slot [27] registered
Jul 2 08:57:03.191065 kernel: acpiphp: Slot [28] registered
Jul 2 08:57:03.191084 kernel: acpiphp: Slot [29] registered
Jul 2 08:57:03.191102 kernel: acpiphp: Slot [30] registered
Jul 2 08:57:03.191120 kernel: acpiphp: Slot [31] registered
Jul 2 08:57:03.191138 kernel: PCI host bridge to bus 0000:00
Jul 2 08:57:03.191359 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 2 08:57:03.191554 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 08:57:03.191740 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 2 08:57:03.191966 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 2 08:57:03.192212 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 2 08:57:03.192464 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 2 08:57:03.192679 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 2 08:57:03.192950 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 08:57:03.193163 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 2 08:57:03.193370 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 08:57:03.193598 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 08:57:03.198704 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 2 08:57:03.199065 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 2 08:57:03.199276 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 2 08:57:03.199491 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 08:57:03.199693 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 2 08:57:03.199948 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 2 08:57:03.200158 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 2 08:57:03.200387 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 2 08:57:03.200605 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 2 08:57:03.204759 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 2 08:57:03.205043 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 08:57:03.205239 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 2 08:57:03.205267 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 08:57:03.205287 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 08:57:03.205307 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 08:57:03.205326 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 08:57:03.205344 kernel: iommu: Default domain type: Translated
Jul 2 08:57:03.205363 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 08:57:03.205388 kernel: efivars: Registered efivars operations
Jul 2 08:57:03.205407 kernel: vgaarb: loaded
Jul 2 08:57:03.205426 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 08:57:03.205445 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:57:03.205463 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:57:03.205482 kernel: pnp: PnP ACPI init
Jul 2 08:57:03.205713 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 2 08:57:03.205742 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 08:57:03.205767 kernel: NET: Registered PF_INET protocol family
Jul 2 08:57:03.205812 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 08:57:03.205832 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 08:57:03.205851 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:57:03.205870 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:57:03.205888 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 08:57:03.205907 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 08:57:03.205925 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:57:03.205944 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:57:03.205969 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:57:03.205988 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:57:03.206007 kernel: kvm [1]: HYP mode not available
Jul 2 08:57:03.206026 kernel: Initialise system trusted keyrings
Jul 2 08:57:03.206044 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 08:57:03.206064 kernel: Key type asymmetric registered
Jul 2 08:57:03.206082 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:57:03.206101 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 08:57:03.206119 kernel: io scheduler mq-deadline registered
Jul 2 08:57:03.206142 kernel: io scheduler kyber registered
Jul 2 08:57:03.206163 kernel: io scheduler bfq registered
Jul 2 08:57:03.206395 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 2 08:57:03.206424 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 08:57:03.206442 kernel: ACPI: button: Power Button [PWRB]
Jul 2 08:57:03.206461 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 2 08:57:03.206480 kernel: ACPI: button: Sleep Button [SLPB]
Jul 2 08:57:03.206498 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:57:03.206524 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 2 08:57:03.206743 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 2 08:57:03.206770 kernel: printk: console [ttyS0] disabled
Jul 2 08:57:03.207008 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 2 08:57:03.207028 kernel: printk: console [ttyS0] enabled
Jul 2 08:57:03.207046 kernel: printk: bootconsole [uart0] disabled
Jul 2 08:57:03.207065 kernel: thunder_xcv, ver 1.0
Jul 2 08:57:03.207084 kernel: thunder_bgx, ver 1.0
Jul 2 08:57:03.207102 kernel: nicpf, ver 1.0
Jul 2 08:57:03.207120 kernel: nicvf, ver 1.0
Jul 2 08:57:03.207363 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 08:57:03.207561 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:57:02 UTC (1719910622)
Jul 2 08:57:03.207587 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 08:57:03.207606 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 2 08:57:03.207624 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 08:57:03.207643 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 08:57:03.207661 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:57:03.207679 kernel: Segment Routing with IPv6
Jul 2 08:57:03.207704 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:57:03.207722 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:57:03.207742 kernel: Key type dns_resolver registered
Jul 2 08:57:03.207760 kernel: registered taskstats version 1
Jul 2 08:57:03.207943 kernel: Loading compiled-in X.509 certificates
Jul 2 08:57:03.207963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 08:57:03.207981 kernel: Key type .fscrypt registered
Jul 2 08:57:03.207999 kernel: Key type fscrypt-provisioning registered
Jul 2 08:57:03.208018 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:57:03.208045 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:57:03.208063 kernel: ima: No architecture policies found
Jul 2 08:57:03.208082 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 08:57:03.208100 kernel: clk: Disabling unused clocks
Jul 2 08:57:03.208118 kernel: Freeing unused kernel memory: 39040K
Jul 2 08:57:03.208158 kernel: Run /init as init process
Jul 2 08:57:03.208179 kernel: with arguments:
Jul 2 08:57:03.208199 kernel: /init
Jul 2 08:57:03.208218 kernel: with environment:
Jul 2 08:57:03.208244 kernel: HOME=/
Jul 2 08:57:03.208278 kernel: TERM=linux
Jul 2 08:57:03.208304 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:57:03.208329 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:57:03.208355 systemd[1]: Detected virtualization amazon.
Jul 2 08:57:03.208376 systemd[1]: Detected architecture arm64.
Jul 2 08:57:03.208397 systemd[1]: Running in initrd.
Jul 2 08:57:03.208418 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:57:03.208446 systemd[1]: Hostname set to .
Jul 2 08:57:03.208468 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:57:03.208489 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:57:03.208510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:57:03.208531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:57:03.208554 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 08:57:03.208575 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:57:03.208603 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 08:57:03.208624 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 08:57:03.208647 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 08:57:03.208668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 08:57:03.208688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:57:03.208708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:57:03.208728 systemd[1]: Reached target paths.target - Path Units.
Jul 2 08:57:03.208753 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:57:03.208799 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:57:03.208827 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 08:57:03.208848 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:57:03.208888 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:57:03.208910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 08:57:03.208932 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 08:57:03.208956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:57:03.208977 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:57:03.209005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:57:03.209025 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 08:57:03.209045 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 08:57:03.209065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:57:03.209086 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 08:57:03.209105 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:57:03.209125 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:57:03.209145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:57:03.209171 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:57:03.209192 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 08:57:03.209212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:57:03.209232 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:57:03.209301 systemd-journald[250]: Collecting audit messages is disabled.
Jul 2 08:57:03.209351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 08:57:03.209372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:57:03.209392 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:57:03.209411 kernel: Bridge firewalling registered
Jul 2 08:57:03.209436 systemd-journald[250]: Journal started
Jul 2 08:57:03.209474 systemd-journald[250]: Runtime Journal (/run/log/journal/ec277be5cf53c260da118cead3744c4f) is 8.0M, max 75.3M, 67.3M free.
Jul 2 08:57:03.166846 systemd-modules-load[251]: Inserted module 'overlay'
Jul 2 08:57:03.207811 systemd-modules-load[251]: Inserted module 'br_netfilter'
Jul 2 08:57:03.222905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:57:03.231457 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:57:03.229435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:57:03.234487 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:57:03.251238 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:57:03.257115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:57:03.269399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:57:03.277155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:57:03.286054 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 08:57:03.304339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:57:03.307248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:57:03.332616 dracut-cmdline[280]: dracut-dracut-053
Jul 2 08:57:03.338650 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 08:57:03.361671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:57:03.380166 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:57:03.455864 systemd-resolved[307]: Positive Trust Anchors:
Jul 2 08:57:03.455901 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:57:03.455964 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:57:03.497818 kernel: SCSI subsystem initialized
Jul 2 08:57:03.505822 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:57:03.517808 kernel: iscsi: registered transport (tcp)
Jul 2 08:57:03.541018 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:57:03.541093 kernel: QLogic iSCSI HBA Driver
Jul 2 08:57:03.649808 kernel: random: crng init done
Jul 2 08:57:03.650100 systemd-resolved[307]: Defaulting to hostname 'linux'.
Jul 2 08:57:03.653478 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:57:03.657372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:57:03.680042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:57:03.694056 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 08:57:03.728929 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:57:03.729061 kernel: device-mapper: uevent: version 1.0.3
Jul 2 08:57:03.729092 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 08:57:03.795838 kernel: raid6: neonx8 gen() 6625 MB/s
Jul 2 08:57:03.812829 kernel: raid6: neonx4 gen() 6430 MB/s
Jul 2 08:57:03.829813 kernel: raid6: neonx2 gen() 5354 MB/s
Jul 2 08:57:03.846807 kernel: raid6: neonx1 gen() 3928 MB/s
Jul 2 08:57:03.863806 kernel: raid6: int64x8 gen() 3793 MB/s
Jul 2 08:57:03.880807 kernel: raid6: int64x4 gen() 3676 MB/s
Jul 2 08:57:03.897804 kernel: raid6: int64x2 gen() 3556 MB/s
Jul 2 08:57:03.915540 kernel: raid6: int64x1 gen() 2764 MB/s
Jul 2 08:57:03.915602 kernel: raid6: using algorithm neonx8 gen() 6625 MB/s
Jul 2 08:57:03.933502 kernel: raid6: .... xor() 4914 MB/s, rmw enabled
Jul 2 08:57:03.933560 kernel: raid6: using neon recovery algorithm
Jul 2 08:57:03.941812 kernel: xor: measuring software checksum speed
Jul 2 08:57:03.941871 kernel: 8regs : 11029 MB/sec
Jul 2 08:57:03.944803 kernel: 32regs : 11922 MB/sec
Jul 2 08:57:03.946950 kernel: arm64_neon : 9307 MB/sec
Jul 2 08:57:03.946985 kernel: xor: using function: 32regs (11922 MB/sec)
Jul 2 08:57:04.031827 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 08:57:04.051449 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 08:57:04.061108 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:57:04.104469 systemd-udevd[468]: Using default interface naming scheme 'v255'.
Jul 2 08:57:04.113376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:57:04.125038 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 08:57:04.159461 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Jul 2 08:57:04.216464 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:57:04.226087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 08:57:04.356924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:57:04.368412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 08:57:04.417288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:57:04.421541 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:57:04.423906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:57:04.423968 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 08:57:04.432562 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 08:57:04.492571 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:57:04.564821 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 08:57:04.564896 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 2 08:57:04.602440 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 2 08:57:04.602721 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 2 08:57:04.603034 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 2 08:57:04.603066 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 2 08:57:04.603336 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b2:ff:e8:91:39
Jul 2 08:57:04.575033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:57:04.575259 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:57:04.578721 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:57:04.583165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:57:04.583389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:57:04.585637 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:57:04.596685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:57:04.620813 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 2 08:57:04.628838 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 08:57:04.628912 kernel: GPT:9289727 != 16777215
Jul 2 08:57:04.631619 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 08:57:04.631689 kernel: GPT:9289727 != 16777215
Jul 2 08:57:04.631715 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 08:57:04.631795 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 08:57:04.636524 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 08:57:04.646286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:57:04.658136 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:57:04.710548 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:57:04.747927 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 2 08:57:04.760931 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (524)
Jul 2 08:57:04.778843 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (538)
Jul 2 08:57:04.832378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 2 08:57:04.889546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 2 08:57:04.894336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 2 08:57:04.912082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 08:57:04.926477 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 08:57:04.937954 disk-uuid[657]: Primary Header is updated.
Jul 2 08:57:04.937954 disk-uuid[657]: Secondary Entries is updated.
Jul 2 08:57:04.937954 disk-uuid[657]: Secondary Header is updated.
Jul 2 08:57:04.948813 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 08:57:04.957432 kernel: GPT:disk_guids don't match.
Jul 2 08:57:04.957498 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 08:57:04.958322 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 08:57:04.965824 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 08:57:05.971593 disk-uuid[658]: The operation has completed successfully.
Jul 2 08:57:05.974562 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 08:57:06.144574 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 08:57:06.144824 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 08:57:06.202052 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 08:57:06.214058 sh[1003]: Success
Jul 2 08:57:06.240856 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 08:57:06.349804 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 08:57:06.372282 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 08:57:06.380426 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 08:57:06.411811 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1
Jul 2 08:57:06.411875 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:57:06.411913 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 08:57:06.413402 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 08:57:06.414579 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 08:57:06.433810 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 2 08:57:06.435291 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 08:57:06.438416 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 08:57:06.456361 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 08:57:06.462066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 08:57:06.490587 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 08:57:06.490654 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:57:06.490683 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 08:57:06.497821 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 08:57:06.517897 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 08:57:06.520202 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 08:57:06.532764 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 08:57:06.543107 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 08:57:06.662982 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:57:06.673115 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 08:57:06.732035 systemd-networkd[1197]: lo: Link UP
Jul 2 08:57:06.732057 systemd-networkd[1197]: lo: Gained carrier
Jul 2 08:57:06.738436 systemd-networkd[1197]: Enumeration completed
Jul 2 08:57:06.739199 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:57:06.739206 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:57:06.742270 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 08:57:06.751113 systemd-networkd[1197]: eth0: Link UP
Jul 2 08:57:06.751122 systemd-networkd[1197]: eth0: Gained carrier
Jul 2 08:57:06.751140 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:57:06.754291 systemd[1]: Reached target network.target - Network.
Jul 2 08:57:06.769893 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.24.171/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 08:57:06.829636 ignition[1102]: Ignition 2.18.0
Jul 2 08:57:06.830168 ignition[1102]: Stage: fetch-offline
Jul 2 08:57:06.830704 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:57:06.830727 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 08:57:06.831277 ignition[1102]: Ignition finished successfully
Jul 2 08:57:06.840847 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:57:06.851077 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 08:57:06.883518 ignition[1207]: Ignition 2.18.0
Jul 2 08:57:06.883540 ignition[1207]: Stage: fetch
Jul 2 08:57:06.884191 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:57:06.884216 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 08:57:06.885195 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 08:57:06.893832 ignition[1207]: PUT result: OK
Jul 2 08:57:06.896599 ignition[1207]: parsed url from cmdline: ""
Jul 2 08:57:06.896615 ignition[1207]: no config URL provided
Jul 2 08:57:06.896630 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:57:06.896655 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:57:06.896687 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 08:57:06.898332 ignition[1207]: PUT result: OK
Jul 2 08:57:06.898422 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 08:57:06.900509 ignition[1207]: GET result: OK
Jul 2 08:57:06.901526 ignition[1207]: parsing config with SHA512: 5d6f7ac3d1de29d77d8dbbe2eeb13fae6a5f625f2096dd3302fcf62040a204cb048291315ee60be63a7fd39579b3075a964bae041f400bb28a34d5f931bdb984
Jul 2 08:57:06.913486 unknown[1207]: fetched base config from "system"
Jul 2 08:57:06.913511 unknown[1207]: fetched base config from "system"
Jul 2 08:57:06.913525 unknown[1207]: fetched user config from "aws"
Jul 2 08:57:06.919701 ignition[1207]: fetch: fetch complete
Jul 2 08:57:06.922053 ignition[1207]: fetch: fetch passed
Jul 2 08:57:06.923194 ignition[1207]: Ignition finished successfully
Jul 2 08:57:06.927472 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 08:57:06.938185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 08:57:06.975688 ignition[1214]: Ignition 2.18.0
Jul 2 08:57:06.975714 ignition[1214]: Stage: kargs
Jul 2 08:57:06.976373 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:57:06.976399 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 08:57:06.977295 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 08:57:06.980503 ignition[1214]: PUT result: OK
Jul 2 08:57:06.987605 ignition[1214]: kargs: kargs passed
Jul 2 08:57:06.987710 ignition[1214]: Ignition finished successfully
Jul 2 08:57:06.991299 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 08:57:06.999073 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 08:57:07.033565 ignition[1221]: Ignition 2.18.0
Jul 2 08:57:07.033592 ignition[1221]: Stage: disks
Jul 2 08:57:07.035462 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:57:07.035492 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 08:57:07.035645 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 08:57:07.040843 ignition[1221]: PUT result: OK
Jul 2 08:57:07.046903 ignition[1221]: disks: disks passed
Jul 2 08:57:07.047059 ignition[1221]: Ignition finished successfully
Jul 2 08:57:07.060762 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 08:57:07.065850 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 08:57:07.066557 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 08:57:07.075113 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 08:57:07.079037 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 08:57:07.081221 systemd[1]: Reached target basic.target - Basic System.
Jul 2 08:57:07.099060 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 08:57:07.136625 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 08:57:07.143701 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 08:57:07.155650 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 08:57:07.248810 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none.
Jul 2 08:57:07.251179 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 08:57:07.254603 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 08:57:07.287957 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:57:07.294167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 08:57:07.297875 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 08:57:07.297977 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 08:57:07.298025 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:57:07.327823 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249)
Jul 2 08:57:07.334247 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 08:57:07.334321 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:57:07.336346 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 08:57:07.349426 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 08:57:07.352798 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 08:57:07.356036 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 08:57:07.361936 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:57:07.538249 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 08:57:07.547265 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Jul 2 08:57:07.556240 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 08:57:07.564439 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 08:57:07.760462 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 08:57:07.770005 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 08:57:07.774058 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 08:57:07.806086 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 08:57:07.810837 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 08:57:07.840842 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 08:57:07.861855 ignition[1362]: INFO : Ignition 2.18.0
Jul 2 08:57:07.861855 ignition[1362]: INFO : Stage: mount
Jul 2 08:57:07.864915 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:57:07.864915 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 08:57:07.864915 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 08:57:07.871387 ignition[1362]: INFO : PUT result: OK
Jul 2 08:57:07.875639 ignition[1362]: INFO : mount: mount passed
Jul 2 08:57:07.875639 ignition[1362]: INFO : Ignition finished successfully
Jul 2 08:57:07.878041 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 08:57:07.905135 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 08:57:07.918823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:57:07.947893 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374)
Jul 2 08:57:07.951558 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 08:57:07.951604 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:57:07.951630 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 08:57:07.956811 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 08:57:07.960405 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:57:08.001453 ignition[1391]: INFO : Ignition 2.18.0
Jul 2 08:57:08.001453 ignition[1391]: INFO : Stage: files
Jul 2 08:57:08.004842 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:57:08.004842 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 08:57:08.004842 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 08:57:08.011274 ignition[1391]: INFO : PUT result: OK
Jul 2 08:57:08.015598 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 08:57:08.018746 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 08:57:08.018746 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 08:57:08.028480 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 08:57:08.031133 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 08:57:08.034060 unknown[1391]: wrote ssh authorized keys file for user: core
Jul 2 08:57:08.036228 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 08:57:08.041063 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 08:57:08.044306 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 08:57:08.044306 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:57:08.044306 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 08:57:08.144440 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 08:57:08.258217 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:57:08.262047 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 08:57:08.291425 systemd-networkd[1197]: eth0: Gained IPv6LL
Jul 2 08:57:08.712988 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 08:57:09.098090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 08:57:09.098090 ignition[1391]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:57:09.106681 ignition[1391]: INFO : files: files passed
Jul 2 08:57:09.106681 ignition[1391]: INFO : Ignition finished successfully
Jul 2 08:57:09.143226 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 08:57:09.167086 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 08:57:09.176683 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 08:57:09.188177 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:57:09.189298 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 08:57:09.209758 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:57:09.209758 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:57:09.217480 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:57:09.223409 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:57:09.226250 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 08:57:09.244145 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 08:57:09.293116 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:57:09.293524 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 08:57:09.301478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 08:57:09.304103 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 08:57:09.309299 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 08:57:09.326173 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 08:57:09.353936 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:57:09.364076 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 08:57:09.395441 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:57:09.399205 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:57:09.402863 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 08:57:09.405728 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:57:09.406583 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:57:09.414383 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 08:57:09.416807 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 08:57:09.421332 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 08:57:09.423571 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:57:09.430145 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 08:57:09.432358 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 08:57:09.434415 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:57:09.442233 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 08:57:09.444404 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 08:57:09.450298 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 08:57:09.451934 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:57:09.452167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:57:09.459105 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:57:09.461181 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:57:09.463458 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 08:57:09.468510 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:57:09.474426 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:57:09.474667 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:57:09.477056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:57:09.477284 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:57:09.479999 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:57:09.480230 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 08:57:09.499289 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 08:57:09.509366 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 08:57:09.514603 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:57:09.514948 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:57:09.517388 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:57:09.517642 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:57:09.534816 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:57:09.535029 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 08:57:09.555512 ignition[1444]: INFO : Ignition 2.18.0
Jul 2 08:57:09.555512 ignition[1444]: INFO : Stage: umount
Jul 2 08:57:09.560850 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:57:09.560850 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 08:57:09.560850 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 08:57:09.569837 ignition[1444]: INFO : PUT result: OK
Jul 2 08:57:09.574834 ignition[1444]: INFO : umount: umount passed
Jul 2 08:57:09.577074 ignition[1444]: INFO : Ignition finished successfully
Jul 2 08:57:09.574894 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:57:09.580332 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:57:09.581204 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 08:57:09.588425 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:57:09.588597 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 08:57:09.600032 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:57:09.600154 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 08:57:09.602092 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:57:09.602179 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 08:57:09.604047 systemd[1]: Stopped target network.target - Network. Jul 2 08:57:09.605994 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:57:09.606111 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:57:09.608234 systemd[1]: Stopped target paths.target - Path Units. Jul 2 08:57:09.609974 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:57:09.618158 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:57:09.618504 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 08:57:09.618612 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 08:57:09.619548 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:57:09.619629 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:57:09.620599 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:57:09.620691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:57:09.622268 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:57:09.622358 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 08:57:09.622571 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 08:57:09.622651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 08:57:09.623234 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 08:57:09.623898 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jul 2 08:57:09.649068 systemd-networkd[1197]: eth0: DHCPv6 lease lost Jul 2 08:57:09.655539 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:57:09.656433 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 08:57:09.660478 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:57:09.660707 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:57:09.673642 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 08:57:09.693434 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:57:09.693590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:57:09.697272 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:57:09.716566 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:57:09.717689 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 08:57:09.734454 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:57:09.736902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:57:09.744066 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:57:09.744731 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 08:57:09.756566 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:57:09.757597 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 08:57:09.764925 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:57:09.765017 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 08:57:09.767458 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:57:09.767527 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 2 08:57:09.770748 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:57:09.771172 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:57:09.781473 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:57:09.781585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 08:57:09.785114 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:57:09.785204 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:57:09.792872 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:57:09.792966 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 08:57:09.806183 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 08:57:09.810764 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:57:09.811032 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:57:09.818835 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:57:09.818941 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 08:57:09.820964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:57:09.821053 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:57:09.823520 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 08:57:09.823598 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:57:09.826104 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:57:09.826180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:57:09.828659 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 2 08:57:09.828738 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:57:09.831292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:57:09.831376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:57:09.885885 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:57:09.886286 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 08:57:09.892487 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 08:57:09.902165 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 08:57:09.923806 systemd[1]: Switching root. Jul 2 08:57:09.965011 systemd-journald[250]: Journal stopped Jul 2 08:57:11.729641 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jul 2 08:57:11.729765 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:57:11.729862 kernel: SELinux: policy capability open_perms=1 Jul 2 08:57:11.729895 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:57:11.729926 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:57:11.729958 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:57:11.729987 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:57:11.730018 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:57:11.730048 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:57:11.730087 kernel: audit: type=1403 audit(1719910630.255:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:57:11.730125 systemd[1]: Successfully loaded SELinux policy in 47.098ms. Jul 2 08:57:11.730179 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.035ms. 
Jul 2 08:57:11.730214 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:57:11.730246 systemd[1]: Detected virtualization amazon. Jul 2 08:57:11.730276 systemd[1]: Detected architecture arm64. Jul 2 08:57:11.730306 systemd[1]: Detected first boot. Jul 2 08:57:11.730338 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:57:11.730369 zram_generator::config[1503]: No configuration found. Jul 2 08:57:11.730403 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:57:11.730438 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:57:11.730469 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 08:57:11.730504 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 08:57:11.730536 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 08:57:11.730565 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 08:57:11.730596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 08:57:11.730629 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 08:57:11.730662 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 08:57:11.730695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 08:57:11.730730 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 08:57:11.730762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 2 08:57:11.730830 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:57:11.730865 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 08:57:11.730895 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 08:57:11.730927 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 08:57:11.730958 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:57:11.731010 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 08:57:11.731046 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:57:11.731083 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 08:57:11.731114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:57:11.731145 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:57:11.731175 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:57:11.731206 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:57:11.731235 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 08:57:11.731264 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 08:57:11.731296 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 08:57:11.731330 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 08:57:11.731360 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:57:11.731389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:57:11.731418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 2 08:57:11.731447 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 08:57:11.731476 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 08:57:11.731508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 08:57:11.731539 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 08:57:11.731573 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 08:57:11.731607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 08:57:11.731649 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 08:57:11.731683 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 08:57:11.731713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:57:11.731742 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:57:11.731819 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 08:57:11.731856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:57:11.731889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:57:11.731919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:57:11.731953 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 08:57:11.731985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:57:11.732017 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:57:11.732049 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jul 2 08:57:11.732081 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 2 08:57:11.732112 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:57:11.732140 kernel: loop: module loaded Jul 2 08:57:11.732172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:57:11.732205 kernel: ACPI: bus type drm_connector registered Jul 2 08:57:11.732234 kernel: fuse: init (API version 7.39) Jul 2 08:57:11.732280 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 08:57:11.732313 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 08:57:11.732343 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:57:11.732374 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 08:57:11.732403 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 08:57:11.732432 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 08:57:11.732461 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 08:57:11.732496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 08:57:11.732526 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 08:57:11.732557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:57:11.732589 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:57:11.732620 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 08:57:11.732649 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 08:57:11.732681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:57:11.732713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 2 08:57:11.732748 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:57:11.732811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:57:11.732848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:57:11.732878 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:57:11.732955 systemd-journald[1602]: Collecting audit messages is disabled. Jul 2 08:57:11.733088 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:57:11.733672 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 08:57:11.733894 systemd-journald[1602]: Journal started Jul 2 08:57:11.733945 systemd-journald[1602]: Runtime Journal (/run/log/journal/ec277be5cf53c260da118cead3744c4f) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:57:11.740864 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:57:11.742948 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:57:11.745113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:57:11.748893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:57:11.766617 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 08:57:11.771636 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 08:57:11.801560 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 08:57:11.810982 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 08:57:11.825907 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 08:57:11.828034 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:57:11.844128 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 2 08:57:11.858993 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 08:57:11.861960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:57:11.872910 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 08:57:11.876050 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:57:11.897037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:57:11.914104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:57:11.921451 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 08:57:11.925302 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 08:57:11.956977 systemd-journald[1602]: Time spent on flushing to /var/log/journal/ec277be5cf53c260da118cead3744c4f is 72.912ms for 898 entries. Jul 2 08:57:11.956977 systemd-journald[1602]: System Journal (/var/log/journal/ec277be5cf53c260da118cead3744c4f) is 8.0M, max 195.6M, 187.6M free. Jul 2 08:57:12.063946 systemd-journald[1602]: Received client request to flush runtime journal. Jul 2 08:57:11.957567 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:57:11.971152 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 08:57:11.974145 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 08:57:11.978899 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 08:57:12.019165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:57:12.032126 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. 
Jul 2 08:57:12.032151 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Jul 2 08:57:12.043135 udevadm[1660]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 08:57:12.047961 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:57:12.064078 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 08:57:12.078191 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 08:57:12.130431 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 08:57:12.142077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:57:12.186338 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Jul 2 08:57:12.186979 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Jul 2 08:57:12.197362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:57:12.868020 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 08:57:12.878131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:57:12.940408 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Jul 2 08:57:12.974423 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:57:12.988120 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:57:13.025153 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 08:57:13.075432 (udev-worker)[1700]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:57:13.191816 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1694) Jul 2 08:57:13.211957 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 08:57:13.251659 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 2 08:57:13.376819 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1695) Jul 2 08:57:13.383279 systemd-networkd[1689]: lo: Link UP Jul 2 08:57:13.384068 systemd-networkd[1689]: lo: Gained carrier Jul 2 08:57:13.389146 systemd-networkd[1689]: Enumeration completed Jul 2 08:57:13.389533 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:57:13.392638 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:57:13.392660 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:57:13.397216 systemd-networkd[1689]: eth0: Link UP Jul 2 08:57:13.397540 systemd-networkd[1689]: eth0: Gained carrier Jul 2 08:57:13.397576 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:57:13.402171 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 08:57:13.413889 systemd-networkd[1689]: eth0: DHCPv4 address 172.31.24.171/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:57:13.447951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:57:13.604920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:57:13.627981 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 08:57:13.670181 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Jul 2 08:57:13.679070 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 08:57:13.708800 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:57:13.748565 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 08:57:13.751651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:57:13.760089 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 08:57:13.776636 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:57:13.818384 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 08:57:13.822542 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:57:13.825135 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:57:13.825336 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:57:13.827509 systemd[1]: Reached target machines.target - Containers. Jul 2 08:57:13.831163 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 08:57:13.839093 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 08:57:13.849185 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 08:57:13.851521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:57:13.857049 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 08:57:13.873055 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jul 2 08:57:13.879641 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 08:57:13.888678 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 08:57:13.931067 kernel: loop0: detected capacity change from 0 to 113672 Jul 2 08:57:13.931551 kernel: block loop0: the capability attribute has been deprecated. Jul 2 08:57:13.948115 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:57:13.950435 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 08:57:13.956010 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 08:57:13.971815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:57:13.997829 kernel: loop1: detected capacity change from 0 to 51896 Jul 2 08:57:14.035821 kernel: loop2: detected capacity change from 0 to 193208 Jul 2 08:57:14.147957 kernel: loop3: detected capacity change from 0 to 59672 Jul 2 08:57:14.205112 kernel: loop4: detected capacity change from 0 to 113672 Jul 2 08:57:14.222825 kernel: loop5: detected capacity change from 0 to 51896 Jul 2 08:57:14.235391 kernel: loop6: detected capacity change from 0 to 193208 Jul 2 08:57:14.261806 kernel: loop7: detected capacity change from 0 to 59672 Jul 2 08:57:14.284917 (sd-merge)[1840]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 08:57:14.286698 (sd-merge)[1840]: Merged extensions into '/usr'. Jul 2 08:57:14.294994 systemd[1]: Reloading requested from client PID 1825 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 08:57:14.295027 systemd[1]: Reloading... Jul 2 08:57:14.440046 zram_generator::config[1872]: No configuration found. Jul 2 08:57:14.538507 ldconfig[1821]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 2 08:57:14.703491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:57:14.818922 systemd-networkd[1689]: eth0: Gained IPv6LL Jul 2 08:57:14.847933 systemd[1]: Reloading finished in 552 ms. Jul 2 08:57:14.876977 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:57:14.880766 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 08:57:14.884300 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 08:57:14.898145 systemd[1]: Starting ensure-sysext.service... Jul 2 08:57:14.911151 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:57:14.927031 systemd[1]: Reloading requested from client PID 1927 ('systemctl') (unit ensure-sysext.service)... Jul 2 08:57:14.927064 systemd[1]: Reloading... Jul 2 08:57:14.948534 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:57:14.949218 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 08:57:14.951691 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:57:14.952463 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Jul 2 08:57:14.952714 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Jul 2 08:57:14.958525 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:57:14.958718 systemd-tmpfiles[1928]: Skipping /boot Jul 2 08:57:14.977388 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 2 08:57:14.977409 systemd-tmpfiles[1928]: Skipping /boot Jul 2 08:57:15.080903 zram_generator::config[1955]: No configuration found. Jul 2 08:57:15.309431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:57:15.451247 systemd[1]: Reloading finished in 523 ms. Jul 2 08:57:15.479598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:57:15.506161 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:57:15.514341 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 08:57:15.527037 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 08:57:15.543160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:57:15.560312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 08:57:15.587323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:57:15.592859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:57:15.600273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:57:15.616017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:57:15.618747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:57:15.634685 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 08:57:15.651522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 2 08:57:15.652819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:57:15.669647 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 08:57:15.676049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:57:15.682122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:57:15.684906 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:57:15.685300 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:57:15.694271 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 08:57:15.700479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:57:15.700950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:57:15.704920 augenrules[2041]: No rules Jul 2 08:57:15.720211 systemd[1]: Finished ensure-sysext.service. Jul 2 08:57:15.724561 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:57:15.740681 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:57:15.741285 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:57:15.745738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:57:15.749918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:57:15.750290 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:57:15.752919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:57:15.762262 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 2 08:57:15.765141 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:57:15.787023 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 08:57:15.817730 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 08:57:15.822708 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:57:15.877008 systemd-resolved[2024]: Positive Trust Anchors: Jul 2 08:57:15.877551 systemd-resolved[2024]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:57:15.877733 systemd-resolved[2024]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:57:15.885983 systemd-resolved[2024]: Defaulting to hostname 'linux'. Jul 2 08:57:15.889532 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:57:15.892061 systemd[1]: Reached target network.target - Network. Jul 2 08:57:15.894017 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:57:15.896273 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:57:15.898396 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:57:15.900497 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 2 08:57:15.902710 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:57:15.905497 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:57:15.908146 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 08:57:15.910450 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 08:57:15.912691 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:57:15.912743 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:57:15.914399 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:57:15.917295 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:57:15.922593 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:57:15.926885 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:57:15.935715 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:57:15.937867 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:57:15.939687 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:57:15.941719 systemd[1]: System is tainted: cgroupsv1 Jul 2 08:57:15.941824 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:57:15.941872 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:57:15.952069 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 08:57:15.959069 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 08:57:15.973266 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jul 2 08:57:15.979866 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:57:15.987338 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:57:15.991934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:57:16.009075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:57:16.018416 jq[2072]: false Jul 2 08:57:16.028061 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 08:57:16.053339 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 08:57:16.067024 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:57:16.075872 dbus-daemon[2071]: [system] SELinux support is enabled Jul 2 08:57:16.075977 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:57:16.088018 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 08:57:16.091985 dbus-daemon[2071]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1689 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 08:57:16.112193 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:57:16.133106 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 08:57:16.147126 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:57:16.151724 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:57:16.166088 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 08:57:16.191554 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 2 08:57:16.197069 ntpd[2080]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: ---------------------------------------------------- Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: corporation. Support and training for ntp-4 are Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: available at https://www.nwtime.org/support Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: ---------------------------------------------------- Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: proto: precision = 0.096 usec (-23) Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: basedate set to 2024-06-19 Jul 2 08:57:16.206226 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: gps base set to 2024-06-23 (week 2320) Jul 2 08:57:16.197115 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:57:16.207454 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:57:16.207454 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:57:16.197135 ntpd[2080]: ---------------------------------------------------- Jul 2 08:57:16.207647 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:57:16.207647 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Listen normally on 3 eth0 172.31.24.171:123 Jul 2 08:57:16.207647 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Listen normally on 4 lo [::1]:123 Jul 2 08:57:16.197154 ntpd[2080]: ntp-4 is 
maintained by Network Time Foundation, Jul 2 08:57:16.208410 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 08:57:16.224250 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Listen normally on 5 eth0 [fe80::4b2:ffff:fee8:9139%2]:123 Jul 2 08:57:16.224250 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: Listening on routing socket on fd #22 for interface updates Jul 2 08:57:16.224250 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:16.224250 ntpd[2080]: 2 Jul 08:57:16 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:16.197172 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:57:16.197190 ntpd[2080]: corporation. Support and training for ntp-4 are Jul 2 08:57:16.197213 ntpd[2080]: available at https://www.nwtime.org/support Jul 2 08:57:16.197230 ntpd[2080]: ---------------------------------------------------- Jul 2 08:57:16.199949 ntpd[2080]: proto: precision = 0.096 usec (-23) Jul 2 08:57:16.203391 ntpd[2080]: basedate set to 2024-06-19 Jul 2 08:57:16.203422 ntpd[2080]: gps base set to 2024-06-23 (week 2320) Jul 2 08:57:16.207154 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:57:16.207225 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:57:16.207473 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:57:16.207533 ntpd[2080]: Listen normally on 3 eth0 172.31.24.171:123 Jul 2 08:57:16.207629 ntpd[2080]: Listen normally on 4 lo [::1]:123 Jul 2 08:57:16.207703 ntpd[2080]: Listen normally on 5 eth0 [fe80::4b2:ffff:fee8:9139%2]:123 Jul 2 08:57:16.207763 ntpd[2080]: Listening on routing socket on fd #22 for interface updates Jul 2 08:57:16.212597 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:16.212655 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:57:16.228213 extend-filesystems[2073]: Found loop4 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found loop5 Jul 2 
08:57:16.259304 extend-filesystems[2073]: Found loop6 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found loop7 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1p1 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1p2 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1p3 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found usr Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1p4 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1p6 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1p7 Jul 2 08:57:16.259304 extend-filesystems[2073]: Found nvme0n1p9 Jul 2 08:57:16.259304 extend-filesystems[2073]: Checking size of /dev/nvme0n1p9 Jul 2 08:57:16.247486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:57:16.318513 jq[2102]: true Jul 2 08:57:16.248002 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 08:57:16.277452 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:57:16.278085 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:57:16.347655 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:57:16.348247 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 08:57:16.363500 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 2 08:57:16.412618 coreos-metadata[2069]: Jul 02 08:57:16.411 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:57:16.424820 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 08:57:16.424910 extend-filesystems[2073]: Resized partition /dev/nvme0n1p9 Jul 2 08:57:16.434157 extend-filesystems[2130]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:57:16.458500 coreos-metadata[2069]: Jul 02 08:57:16.432 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 08:57:16.458500 coreos-metadata[2069]: Jul 02 08:57:16.444 INFO Fetch successful Jul 2 08:57:16.458500 coreos-metadata[2069]: Jul 02 08:57:16.444 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 08:57:16.458500 coreos-metadata[2069]: Jul 02 08:57:16.457 INFO Fetch successful Jul 2 08:57:16.458500 coreos-metadata[2069]: Jul 02 08:57:16.457 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 08:57:16.466968 coreos-metadata[2069]: Jul 02 08:57:16.460 INFO Fetch successful Jul 2 08:57:16.466968 coreos-metadata[2069]: Jul 02 08:57:16.460 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 08:57:16.471796 coreos-metadata[2069]: Jul 02 08:57:16.467 INFO Fetch successful Jul 2 08:57:16.471796 coreos-metadata[2069]: Jul 02 08:57:16.467 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 08:57:16.475386 coreos-metadata[2069]: Jul 02 08:57:16.475 INFO Fetch failed with 404: resource not found Jul 2 08:57:16.475386 coreos-metadata[2069]: Jul 02 08:57:16.475 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 08:57:16.477059 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 2 08:57:16.478468 dbus-daemon[2071]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 08:57:16.479939 coreos-metadata[2069]: Jul 02 08:57:16.479 INFO Fetch successful Jul 2 08:57:16.479939 coreos-metadata[2069]: Jul 02 08:57:16.479 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 08:57:16.486178 tar[2115]: linux-arm64/helm Jul 2 08:57:16.480880 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:57:16.495833 coreos-metadata[2069]: Jul 02 08:57:16.495 INFO Fetch successful Jul 2 08:57:16.495833 coreos-metadata[2069]: Jul 02 08:57:16.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 08:57:16.483348 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:57:16.483386 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 08:57:16.492016 (ntainerd)[2129]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:57:16.528971 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 08:57:16.502049 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 2 08:57:16.567061 coreos-metadata[2069]: Jul 02 08:57:16.497 INFO Fetch successful Jul 2 08:57:16.567061 coreos-metadata[2069]: Jul 02 08:57:16.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 08:57:16.567061 coreos-metadata[2069]: Jul 02 08:57:16.506 INFO Fetch successful Jul 2 08:57:16.567061 coreos-metadata[2069]: Jul 02 08:57:16.506 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 08:57:16.567061 coreos-metadata[2069]: Jul 02 08:57:16.524 INFO Fetch successful Jul 2 08:57:16.567301 update_engine[2096]: I0702 08:57:16.508741 2096 main.cc:92] Flatcar Update Engine starting Jul 2 08:57:16.567301 update_engine[2096]: I0702 08:57:16.546072 2096 update_check_scheduler.cc:74] Next update check in 6m36s Jul 2 08:57:16.628526 jq[2119]: true Jul 2 08:57:16.543496 systemd[1]: Started update-engine.service - Update Engine. Jul 2 08:57:16.629033 extend-filesystems[2130]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 08:57:16.629033 extend-filesystems[2130]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:57:16.629033 extend-filesystems[2130]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 08:57:16.547236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:57:16.677567 extend-filesystems[2073]: Resized filesystem in /dev/nvme0n1p9 Jul 2 08:57:16.630758 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 08:57:16.680559 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:57:16.681100 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 08:57:16.694082 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 08:57:16.710082 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Jul 2 08:57:16.795511 systemd-logind[2093]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:57:16.795565 systemd-logind[2093]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 08:57:16.798117 systemd-logind[2093]: New seat seat0. Jul 2 08:57:16.802525 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 08:57:16.811390 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 08:57:16.813710 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:57:16.937448 bash[2190]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:57:16.956751 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 08:57:16.999887 locksmithd[2145]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: Initializing new seelog logger Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: New Seelog Logger Creation Complete Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 processing appconfig overrides Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 processing appconfig overrides Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 processing appconfig overrides Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO Proxy environment variables: Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:57:17.058482 amazon-ssm-agent[2160]: 2024/07/02 08:57:17 processing appconfig overrides Jul 2 08:57:17.063341 containerd[2129]: time="2024-07-02T08:57:16.969297863Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:57:17.090493 systemd[1]: Starting sshkeys.service... Jul 2 08:57:17.114016 containerd[2129]: time="2024-07-02T08:57:17.111142472Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 08:57:17.114016 containerd[2129]: time="2024-07-02T08:57:17.111217832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.130059356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.130133228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.130594976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.130644032Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.133027532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.133188116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.133219796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134076 containerd[2129]: time="2024-07-02T08:57:17.133396088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:57:17.134520 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO https_proxy: Jul 2 08:57:17.138834 containerd[2129]: time="2024-07-02T08:57:17.138323408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:57:17.138834 containerd[2129]: time="2024-07-02T08:57:17.138400988Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:57:17.138834 containerd[2129]: time="2024-07-02T08:57:17.138446768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:57:17.145427 containerd[2129]: time="2024-07-02T08:57:17.138823868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:57:17.145427 containerd[2129]: time="2024-07-02T08:57:17.138870836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:57:17.145427 containerd[2129]: time="2024-07-02T08:57:17.139051196Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:57:17.145427 containerd[2129]: time="2024-07-02T08:57:17.139092344Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:57:17.148344 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152218928Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152330348Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152366528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152531300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152583644Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152611868Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152644256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152947580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.152992064Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.153024596Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.153057236Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.153090044Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.153130280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:57:17.154834 containerd[2129]: time="2024-07-02T08:57:17.153161744Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:57:17.155687 containerd[2129]: time="2024-07-02T08:57:17.153193928Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:57:17.155687 containerd[2129]: time="2024-07-02T08:57:17.153226388Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:57:17.155687 containerd[2129]: time="2024-07-02T08:57:17.153269312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 2 08:57:17.155687 containerd[2129]: time="2024-07-02T08:57:17.153302060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:57:17.155687 containerd[2129]: time="2024-07-02T08:57:17.153329672Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:57:17.155687 containerd[2129]: time="2024-07-02T08:57:17.153564296Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:57:17.167481 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181524548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181613324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181650980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181733324Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181882124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181923716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181957112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.181987736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.182018888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.182048396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.182079512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.182110076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.182149592Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:57:17.183805 containerd[2129]: time="2024-07-02T08:57:17.182514488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.184558 containerd[2129]: time="2024-07-02T08:57:17.182567492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.184558 containerd[2129]: time="2024-07-02T08:57:17.182598596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.184558 containerd[2129]: time="2024-07-02T08:57:17.182637608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.184558 containerd[2129]: time="2024-07-02T08:57:17.182672288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 2 08:57:17.184558 containerd[2129]: time="2024-07-02T08:57:17.182705456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.184558 containerd[2129]: time="2024-07-02T08:57:17.182734772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.199183 containerd[2129]: time="2024-07-02T08:57:17.182762384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 08:57:17.199380 containerd[2129]: time="2024-07-02T08:57:17.195999188Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: 
Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:57:17.199380 containerd[2129]: time="2024-07-02T08:57:17.196140356Z" level=info msg="Connect containerd service" Jul 2 08:57:17.199380 containerd[2129]: time="2024-07-02T08:57:17.196222220Z" level=info msg="using legacy CRI server" Jul 2 08:57:17.199380 containerd[2129]: time="2024-07-02T08:57:17.196268336Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:57:17.199380 containerd[2129]: time="2024-07-02T08:57:17.196454252Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217092009Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217219137Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217366497Z" level=info msg="Start subscribing containerd event" Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217434573Z" level=info msg="Start recovering state" Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217571469Z" level=info msg="Start event monitor" Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217597965Z" level=info msg="Start snapshots syncer" Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217622505Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:57:17.225122 containerd[2129]: time="2024-07-02T08:57:17.217643049Z" level=info msg="Start streaming server" Jul 2 08:57:17.226821 containerd[2129]: time="2024-07-02T08:57:17.226099149Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:57:17.226821 containerd[2129]: time="2024-07-02T08:57:17.226163037Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:57:17.226821 containerd[2129]: time="2024-07-02T08:57:17.226201137Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:57:17.226821 containerd[2129]: time="2024-07-02T08:57:17.226558689Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:57:17.226821 containerd[2129]: time="2024-07-02T08:57:17.226701573Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 08:57:17.229097 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:57:17.232901 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO http_proxy: Jul 2 08:57:17.237263 containerd[2129]: time="2024-07-02T08:57:17.229171329Z" level=info msg="containerd successfully booted in 0.261582s" Jul 2 08:57:17.312833 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (2191) Jul 2 08:57:17.340914 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO no_proxy: Jul 2 08:57:17.423299 dbus-daemon[2071]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 08:57:17.423578 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 08:57:17.432965 dbus-daemon[2071]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2139 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 08:57:17.446027 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO Checking if agent identity type OnPrem can be assumed Jul 2 08:57:17.465035 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 2 08:57:17.519881 coreos-metadata[2209]: Jul 02 08:57:17.518 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:57:17.527962 coreos-metadata[2209]: Jul 02 08:57:17.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 08:57:17.530818 coreos-metadata[2209]: Jul 02 08:57:17.530 INFO Fetch successful Jul 2 08:57:17.530818 coreos-metadata[2209]: Jul 02 08:57:17.530 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:57:17.533969 coreos-metadata[2209]: Jul 02 08:57:17.533 INFO Fetch successful Jul 2 08:57:17.537522 polkitd[2238]: Started polkitd version 121 Jul 2 08:57:17.540541 unknown[2209]: wrote ssh authorized keys file for user: core Jul 2 08:57:17.547889 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO Checking if agent identity type EC2 can be assumed Jul 2 08:57:17.617923 update-ssh-keys[2268]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:57:17.622176 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 08:57:17.624851 polkitd[2238]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 08:57:17.624966 polkitd[2238]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 08:57:17.635917 systemd[1]: Finished sshkeys.service. Jul 2 08:57:17.639835 polkitd[2238]: Finished loading, compiling and executing 2 rules Jul 2 08:57:17.641231 dbus-daemon[2071]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 08:57:17.642659 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 08:57:17.647302 polkitd[2238]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 08:57:17.652383 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO Agent will take identity from EC2 Jul 2 08:57:17.708450 systemd-hostnamed[2139]: Hostname set to (transient) Jul 2 08:57:17.708615 systemd-resolved[2024]: System hostname changed to 'ip-172-31-24-171'. 
Jul 2 08:57:17.756722 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:57:17.868793 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:57:17.967250 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:57:18.067745 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 08:57:18.168911 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 2 08:57:18.273869 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 08:57:18.373954 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 08:57:18.475796 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [Registrar] Starting registrar module Jul 2 08:57:18.566201 tar[2115]: linux-arm64/LICENSE Jul 2 08:57:18.566755 tar[2115]: linux-arm64/README.md Jul 2 08:57:18.575943 amazon-ssm-agent[2160]: 2024-07-02 08:57:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 08:57:18.617679 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 08:57:18.761864 amazon-ssm-agent[2160]: 2024-07-02 08:57:18 INFO [EC2Identity] EC2 registration was successful. 
Jul 2 08:57:18.801363 amazon-ssm-agent[2160]: 2024-07-02 08:57:18 INFO [CredentialRefresher] credentialRefresher has started Jul 2 08:57:18.801363 amazon-ssm-agent[2160]: 2024-07-02 08:57:18 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 08:57:18.801562 amazon-ssm-agent[2160]: 2024-07-02 08:57:18 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 08:57:18.860798 amazon-ssm-agent[2160]: 2024-07-02 08:57:18 INFO [CredentialRefresher] Next credential rotation will be in 30.591659498866665 minutes Jul 2 08:57:19.052897 sshd_keygen[2105]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:57:19.095361 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:57:19.108987 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 08:57:19.131324 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:57:19.132073 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 08:57:19.146344 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:57:19.173495 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 08:57:19.184465 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:57:19.196355 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 08:57:19.199254 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:57:19.373180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:57:19.376920 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:57:19.379583 systemd[1]: Startup finished in 8.671s (kernel) + 9.168s (userspace) = 17.840s. 
Jul 2 08:57:19.391330 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:57:19.828264 amazon-ssm-agent[2160]: 2024-07-02 08:57:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 08:57:19.928928 amazon-ssm-agent[2160]: 2024-07-02 08:57:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2366) started Jul 2 08:57:20.029062 amazon-ssm-agent[2160]: 2024-07-02 08:57:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 08:57:20.773624 kubelet[2356]: E0702 08:57:20.773484 2356 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:57:20.778927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:57:20.779446 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:57:25.616168 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:57:25.623680 systemd[1]: Started sshd@0-172.31.24.171:22-147.75.109.163:39786.service - OpenSSH per-connection server daemon (147.75.109.163:39786). Jul 2 08:57:25.814486 sshd[2380]: Accepted publickey for core from 147.75.109.163 port 39786 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:57:25.817764 sshd[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:25.832973 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:57:25.838206 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 2 08:57:25.843929 systemd-logind[2093]: New session 1 of user core. Jul 2 08:57:25.880382 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:57:25.893899 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 08:57:25.903709 (systemd)[2386]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:26.119998 systemd[2386]: Queued start job for default target default.target. Jul 2 08:57:26.120722 systemd[2386]: Created slice app.slice - User Application Slice. Jul 2 08:57:26.120797 systemd[2386]: Reached target paths.target - Paths. Jul 2 08:57:26.120834 systemd[2386]: Reached target timers.target - Timers. Jul 2 08:57:26.128939 systemd[2386]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:57:26.145540 systemd[2386]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:57:26.145670 systemd[2386]: Reached target sockets.target - Sockets. Jul 2 08:57:26.145703 systemd[2386]: Reached target basic.target - Basic System. Jul 2 08:57:26.145826 systemd[2386]: Reached target default.target - Main User Target. Jul 2 08:57:26.145892 systemd[2386]: Startup finished in 230ms. Jul 2 08:57:26.146186 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:57:26.150365 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:57:26.308957 systemd[1]: Started sshd@1-172.31.24.171:22-147.75.109.163:39796.service - OpenSSH per-connection server daemon (147.75.109.163:39796). Jul 2 08:57:26.493679 sshd[2398]: Accepted publickey for core from 147.75.109.163 port 39796 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:57:26.496381 sshd[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:26.506512 systemd-logind[2093]: New session 2 of user core. Jul 2 08:57:26.516511 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 2 08:57:26.653196 sshd[2398]: pam_unix(sshd:session): session closed for user core Jul 2 08:57:26.659233 systemd-logind[2093]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:57:26.659435 systemd[1]: sshd@1-172.31.24.171:22-147.75.109.163:39796.service: Deactivated successfully. Jul 2 08:57:26.666721 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:57:26.669715 systemd-logind[2093]: Removed session 2. Jul 2 08:57:26.685404 systemd[1]: Started sshd@2-172.31.24.171:22-147.75.109.163:39798.service - OpenSSH per-connection server daemon (147.75.109.163:39798). Jul 2 08:57:26.856451 sshd[2406]: Accepted publickey for core from 147.75.109.163 port 39798 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:57:26.858631 sshd[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:26.867393 systemd-logind[2093]: New session 3 of user core. Jul 2 08:57:26.880525 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 08:57:27.003150 sshd[2406]: pam_unix(sshd:session): session closed for user core Jul 2 08:57:27.008112 systemd[1]: sshd@2-172.31.24.171:22-147.75.109.163:39798.service: Deactivated successfully. Jul 2 08:57:27.015155 systemd-logind[2093]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:57:27.016733 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:57:27.020481 systemd-logind[2093]: Removed session 3. Jul 2 08:57:27.037315 systemd[1]: Started sshd@3-172.31.24.171:22-147.75.109.163:39814.service - OpenSSH per-connection server daemon (147.75.109.163:39814). Jul 2 08:57:27.206890 sshd[2414]: Accepted publickey for core from 147.75.109.163 port 39814 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:57:27.210073 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:27.219158 systemd-logind[2093]: New session 4 of user core. 
Jul 2 08:57:27.230383 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 08:57:27.364122 sshd[2414]: pam_unix(sshd:session): session closed for user core Jul 2 08:57:27.370158 systemd[1]: sshd@3-172.31.24.171:22-147.75.109.163:39814.service: Deactivated successfully. Jul 2 08:57:27.376004 systemd-logind[2093]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:57:27.380614 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:57:27.382085 systemd-logind[2093]: Removed session 4. Jul 2 08:57:27.395282 systemd[1]: Started sshd@4-172.31.24.171:22-147.75.109.163:39816.service - OpenSSH per-connection server daemon (147.75.109.163:39816). Jul 2 08:57:27.566701 sshd[2422]: Accepted publickey for core from 147.75.109.163 port 39816 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:57:27.569405 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:27.579315 systemd-logind[2093]: New session 5 of user core. Jul 2 08:57:27.589379 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 08:57:27.709870 sudo[2426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 08:57:27.710451 sudo[2426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:57:27.726268 sudo[2426]: pam_unix(sudo:session): session closed for user root Jul 2 08:57:27.750234 sshd[2422]: pam_unix(sshd:session): session closed for user core Jul 2 08:57:27.757625 systemd[1]: sshd@4-172.31.24.171:22-147.75.109.163:39816.service: Deactivated successfully. Jul 2 08:57:27.762568 systemd-logind[2093]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:57:27.764008 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:57:27.765856 systemd-logind[2093]: Removed session 5. Jul 2 08:57:27.780255 systemd[1]: Started sshd@5-172.31.24.171:22-147.75.109.163:39824.service - OpenSSH per-connection server daemon (147.75.109.163:39824). 
Jul 2 08:57:27.961317 sshd[2431]: Accepted publickey for core from 147.75.109.163 port 39824 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:57:27.963166 sshd[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:27.972988 systemd-logind[2093]: New session 6 of user core. Jul 2 08:57:27.979306 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 08:57:28.086481 sudo[2436]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 08:57:28.087542 sudo[2436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:57:28.093732 sudo[2436]: pam_unix(sudo:session): session closed for user root Jul 2 08:57:28.103457 sudo[2435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 08:57:28.103989 sudo[2435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:57:28.130253 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 08:57:28.135634 auditctl[2439]: No rules Jul 2 08:57:28.134765 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 08:57:28.135288 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 08:57:28.146539 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:57:28.195047 augenrules[2458]: No rules Jul 2 08:57:28.196596 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:57:28.200207 sudo[2435]: pam_unix(sudo:session): session closed for user root Jul 2 08:57:28.225129 sshd[2431]: pam_unix(sshd:session): session closed for user core Jul 2 08:57:28.230606 systemd[1]: sshd@5-172.31.24.171:22-147.75.109.163:39824.service: Deactivated successfully. Jul 2 08:57:28.237853 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:57:28.239105 systemd-logind[2093]: Session 6 logged out. 
Waiting for processes to exit. Jul 2 08:57:28.240832 systemd-logind[2093]: Removed session 6. Jul 2 08:57:28.258259 systemd[1]: Started sshd@6-172.31.24.171:22-147.75.109.163:39834.service - OpenSSH per-connection server daemon (147.75.109.163:39834). Jul 2 08:57:28.424770 sshd[2467]: Accepted publickey for core from 147.75.109.163 port 39834 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:57:28.427238 sshd[2467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:57:28.434414 systemd-logind[2093]: New session 7 of user core. Jul 2 08:57:28.443383 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 08:57:28.549418 sudo[2471]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:57:28.550651 sudo[2471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:57:28.717291 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 08:57:28.718176 (dockerd)[2480]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 08:57:29.037968 dockerd[2480]: time="2024-07-02T08:57:29.037857555Z" level=info msg="Starting up" Jul 2 08:57:29.463641 dockerd[2480]: time="2024-07-02T08:57:29.463080675Z" level=info msg="Loading containers: start." Jul 2 08:57:29.622808 kernel: Initializing XFRM netlink socket Jul 2 08:57:29.654473 (udev-worker)[2493]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:57:29.737981 systemd-networkd[1689]: docker0: Link UP Jul 2 08:57:29.763588 dockerd[2480]: time="2024-07-02T08:57:29.763370463Z" level=info msg="Loading containers: done." 
Jul 2 08:57:29.841543 dockerd[2480]: time="2024-07-02T08:57:29.841460273Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:57:29.841876 dockerd[2480]: time="2024-07-02T08:57:29.841824067Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 08:57:29.842085 dockerd[2480]: time="2024-07-02T08:57:29.842039646Z" level=info msg="Daemon has completed initialization" Jul 2 08:57:29.894076 dockerd[2480]: time="2024-07-02T08:57:29.892935866Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:57:29.894991 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 08:57:30.947343 containerd[2129]: time="2024-07-02T08:57:30.947277941Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 08:57:31.025655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:57:31.043031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:57:31.378262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:57:31.402573 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:57:31.517466 kubelet[2626]: E0702 08:57:31.517356 2626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:57:31.529189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:57:31.529607 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 08:57:31.689356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1694169429.mount: Deactivated successfully. Jul 2 08:57:33.979183 containerd[2129]: time="2024-07-02T08:57:33.979124292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:33.981414 containerd[2129]: time="2024-07-02T08:57:33.981356555Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jul 2 08:57:33.982380 containerd[2129]: time="2024-07-02T08:57:33.982316555Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:33.989147 containerd[2129]: time="2024-07-02T08:57:33.989053062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:33.991684 containerd[2129]: time="2024-07-02T08:57:33.991447094Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 3.043495532s" Jul 2 08:57:33.991684 containerd[2129]: time="2024-07-02T08:57:33.991505888Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jul 2 08:57:34.032170 containerd[2129]: time="2024-07-02T08:57:34.032105269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 08:57:36.481767 containerd[2129]: 
time="2024-07-02T08:57:36.481702313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:36.483994 containerd[2129]: time="2024-07-02T08:57:36.483937566Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jul 2 08:57:36.484824 containerd[2129]: time="2024-07-02T08:57:36.484357164Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:36.491187 containerd[2129]: time="2024-07-02T08:57:36.491100754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:36.493639 containerd[2129]: time="2024-07-02T08:57:36.493315104Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 2.461146263s" Jul 2 08:57:36.493639 containerd[2129]: time="2024-07-02T08:57:36.493376755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 2 08:57:36.534022 containerd[2129]: time="2024-07-02T08:57:36.533965404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 08:57:38.019399 containerd[2129]: time="2024-07-02T08:57:38.019333783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:38.021496 containerd[2129]: time="2024-07-02T08:57:38.021442121Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jul 2 08:57:38.022263 containerd[2129]: time="2024-07-02T08:57:38.022177248Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:38.028057 containerd[2129]: time="2024-07-02T08:57:38.027958557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:38.030470 containerd[2129]: time="2024-07-02T08:57:38.030271873Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.496241638s" Jul 2 08:57:38.030470 containerd[2129]: time="2024-07-02T08:57:38.030335769Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 2 08:57:38.069612 containerd[2129]: time="2024-07-02T08:57:38.069546789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 08:57:39.313326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105837812.mount: Deactivated successfully. 
Jul 2 08:57:39.830666 containerd[2129]: time="2024-07-02T08:57:39.830604494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:39.834687 containerd[2129]: time="2024-07-02T08:57:39.833658016Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jul 2 08:57:39.834687 containerd[2129]: time="2024-07-02T08:57:39.834153072Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:39.842810 containerd[2129]: time="2024-07-02T08:57:39.842700179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:39.843816 containerd[2129]: time="2024-07-02T08:57:39.843716247Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.774102164s" Jul 2 08:57:39.843816 containerd[2129]: time="2024-07-02T08:57:39.843804335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 08:57:39.885233 containerd[2129]: time="2024-07-02T08:57:39.885186412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:57:40.345680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999489299.mount: Deactivated successfully. 
Jul 2 08:57:40.351059 containerd[2129]: time="2024-07-02T08:57:40.350998049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:40.352661 containerd[2129]: time="2024-07-02T08:57:40.352578651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 08:57:40.352973 containerd[2129]: time="2024-07-02T08:57:40.352905634Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:40.357058 containerd[2129]: time="2024-07-02T08:57:40.356963338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:57:40.359431 containerd[2129]: time="2024-07-02T08:57:40.358732818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 473.312361ms" Jul 2 08:57:40.359431 containerd[2129]: time="2024-07-02T08:57:40.358814183Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 08:57:40.397256 containerd[2129]: time="2024-07-02T08:57:40.397193307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 08:57:40.995676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404493228.mount: Deactivated successfully. Jul 2 08:57:41.774222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 2 08:57:41.788334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:57:42.607151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:57:42.618441 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:57:42.751833 kubelet[2783]: E0702 08:57:42.751682    2783 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:57:42.757140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:57:42.757765 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:57:44.373090 containerd[2129]: time="2024-07-02T08:57:44.373008785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:57:44.375340 containerd[2129]: time="2024-07-02T08:57:44.375283166Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Jul 2 08:57:44.376203 containerd[2129]: time="2024-07-02T08:57:44.376121004Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:57:44.382340 containerd[2129]: time="2024-07-02T08:57:44.382285869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:57:44.386053 containerd[2129]: time="2024-07-02T08:57:44.385852000Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.988598567s"
Jul 2 08:57:44.386053 containerd[2129]: time="2024-07-02T08:57:44.385916736Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 08:57:44.424391 containerd[2129]: time="2024-07-02T08:57:44.424063436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 08:57:44.952379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670408732.mount: Deactivated successfully.
Jul 2 08:57:45.438060 containerd[2129]: time="2024-07-02T08:57:45.437981861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:57:45.439744 containerd[2129]: time="2024-07-02T08:57:45.439676076Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462"
Jul 2 08:57:45.440626 containerd[2129]: time="2024-07-02T08:57:45.440541420Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:57:45.444874 containerd[2129]: time="2024-07-02T08:57:45.444819926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:57:45.448245 containerd[2129]: time="2024-07-02T08:57:45.448020858Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.023895471s"
Jul 2 08:57:45.448245 containerd[2129]: time="2024-07-02T08:57:45.448077034Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Jul 2 08:57:47.720311 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 08:57:51.659010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:57:51.673217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:57:51.719998 systemd[1]: Reloading requested from client PID 2877 ('systemctl') (unit session-7.scope)...
Jul 2 08:57:51.720031 systemd[1]: Reloading...
Jul 2 08:57:51.942809 zram_generator::config[2915]: No configuration found.
Jul 2 08:57:52.186980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:57:52.346570 systemd[1]: Reloading finished in 625 ms.
Jul 2 08:57:52.425203 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 08:57:52.425417 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 08:57:52.426117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:57:52.434453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:57:52.712145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:57:52.721499 (kubelet)[2988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 08:57:52.799877 kubelet[2988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:57:52.799877 kubelet[2988]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:57:52.800425 kubelet[2988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:57:52.801866 kubelet[2988]: I0702 08:57:52.801762    2988 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:57:54.009858 kubelet[2988]: I0702 08:57:54.009750    2988 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 08:57:54.009858 kubelet[2988]: I0702 08:57:54.009825    2988 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:57:54.010528 kubelet[2988]: I0702 08:57:54.010160    2988 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 08:57:54.041101 kubelet[2988]: I0702 08:57:54.040735    2988 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:57:54.041339 kubelet[2988]: E0702 08:57:54.041314    2988 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.171:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.053914 kubelet[2988]: W0702 08:57:54.053856    2988 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 08:57:54.055306 kubelet[2988]: I0702 08:57:54.055230    2988 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:57:54.056057 kubelet[2988]: I0702 08:57:54.056013    2988 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:57:54.056363 kubelet[2988]: I0702 08:57:54.056318    2988 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:57:54.056536 kubelet[2988]: I0702 08:57:54.056380    2988 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:57:54.056536 kubelet[2988]: I0702 08:57:54.056402    2988 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:57:54.056646 kubelet[2988]: I0702 08:57:54.056602    2988 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:57:54.058973 kubelet[2988]: I0702 08:57:54.058925    2988 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 08:57:54.058973 kubelet[2988]: I0702 08:57:54.058973    2988 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:57:54.059131 kubelet[2988]: I0702 08:57:54.059041    2988 kubelet.go:309] "Adding apiserver pod source"
Jul 2 08:57:54.059131 kubelet[2988]: I0702 08:57:54.059088    2988 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:57:54.062642 kubelet[2988]: W0702 08:57:54.061862    2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.24.171:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-171&limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.062642 kubelet[2988]: E0702 08:57:54.061956    2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.171:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-171&limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.062642 kubelet[2988]: W0702 08:57:54.062534    2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.24.171:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.062642 kubelet[2988]: E0702 08:57:54.062592    2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.171:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.063325 kubelet[2988]: I0702 08:57:54.063296    2988 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 08:57:54.066798 kubelet[2988]: W0702 08:57:54.066740    2988 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:57:54.067976 kubelet[2988]: I0702 08:57:54.067939    2988 server.go:1232] "Started kubelet"
Jul 2 08:57:54.072539 kubelet[2988]: I0702 08:57:54.072496    2988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:57:54.072850 kubelet[2988]: E0702 08:57:54.072766    2988 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 08:57:54.073060 kubelet[2988]: E0702 08:57:54.072863    2988 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:57:54.076543 kubelet[2988]: I0702 08:57:54.075837    2988 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:57:54.077247 kubelet[2988]: I0702 08:57:54.077198    2988 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 08:57:54.079150 kubelet[2988]: I0702 08:57:54.079097    2988 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 08:57:54.079508 kubelet[2988]: I0702 08:57:54.079467    2988 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:57:54.083530 kubelet[2988]: I0702 08:57:54.083474    2988 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:57:54.087007 kubelet[2988]: I0702 08:57:54.086242    2988 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 08:57:54.087007 kubelet[2988]: I0702 08:57:54.086409    2988 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 08:57:54.089916 kubelet[2988]: E0702 08:57:54.088180    2988 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-24-171.17de59a545385512", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-24-171", UID:"ip-172-31-24-171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-24-171"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 57, 54, 67903762, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 57, 54, 67903762, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-24-171"}': 'Post "https://172.31.24.171:6443/api/v1/namespaces/default/events": dial tcp 172.31.24.171:6443: connect: connection refused'(may retry after sleeping)
Jul 2 08:57:54.089916 kubelet[2988]: W0702 08:57:54.089056    2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.24.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.089916 kubelet[2988]: E0702 08:57:54.089147    2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.090239 kubelet[2988]: E0702 08:57:54.089304    2988 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-171?timeout=10s\": dial tcp 172.31.24.171:6443: connect: connection refused" interval="200ms"
Jul 2 08:57:54.128538 kubelet[2988]: I0702 08:57:54.128470    2988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:57:54.130604 kubelet[2988]: I0702 08:57:54.130560    2988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:57:54.130604 kubelet[2988]: I0702 08:57:54.130603    2988 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:57:54.130989 kubelet[2988]: I0702 08:57:54.130643    2988 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 08:57:54.130989 kubelet[2988]: E0702 08:57:54.130727    2988 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:57:54.151843 kubelet[2988]: W0702 08:57:54.151460    2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.24.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.151843 kubelet[2988]: E0702 08:57:54.151531    2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.190880 kubelet[2988]: I0702 08:57:54.190835    2988 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-171"
Jul 2 08:57:54.191669 kubelet[2988]: E0702 08:57:54.191568    2988 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.171:6443/api/v1/nodes\": dial tcp 172.31.24.171:6443: connect: connection refused" node="ip-172-31-24-171"
Jul 2 08:57:54.191669 kubelet[2988]: I0702 08:57:54.191624    2988 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:57:54.191669 kubelet[2988]: I0702 08:57:54.191645    2988 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:57:54.191669 kubelet[2988]: I0702 08:57:54.191675    2988 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:57:54.194099 kubelet[2988]: I0702 08:57:54.194059    2988 policy_none.go:49] "None policy: Start"
Jul 2 08:57:54.195261 kubelet[2988]: I0702 08:57:54.195230    2988 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 08:57:54.195385 kubelet[2988]: I0702 08:57:54.195277    2988 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:57:54.204815 kubelet[2988]: I0702 08:57:54.203478    2988 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:57:54.204815 kubelet[2988]: I0702 08:57:54.203887    2988 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:57:54.212171 kubelet[2988]: E0702 08:57:54.212111    2988 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-171\" not found"
Jul 2 08:57:54.231311 kubelet[2988]: I0702 08:57:54.231259    2988 topology_manager.go:215] "Topology Admit Handler" podUID="b12b01a531d560fbf6543090236972f5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-171"
Jul 2 08:57:54.233660 kubelet[2988]: I0702 08:57:54.233245    2988 topology_manager.go:215] "Topology Admit Handler" podUID="ee4232a9cf01e66113de0e7b9b74f6d9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-171"
Jul 2 08:57:54.237859 kubelet[2988]: I0702 08:57:54.237385    2988 topology_manager.go:215] "Topology Admit Handler" podUID="4d7814f7c2796483e1d265f8bd35ad34" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-171"
Jul 2 08:57:54.287271 kubelet[2988]: I0702 08:57:54.287154    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b12b01a531d560fbf6543090236972f5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-171\" (UID: \"b12b01a531d560fbf6543090236972f5\") " pod="kube-system/kube-apiserver-ip-172-31-24-171"
Jul 2 08:57:54.287271 kubelet[2988]: I0702 08:57:54.287243    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:57:54.287608 kubelet[2988]: I0702 08:57:54.287296    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:57:54.287608 kubelet[2988]: I0702 08:57:54.287349    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:57:54.287608 kubelet[2988]: I0702 08:57:54.287396    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d7814f7c2796483e1d265f8bd35ad34-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-171\" (UID: \"4d7814f7c2796483e1d265f8bd35ad34\") " pod="kube-system/kube-scheduler-ip-172-31-24-171"
Jul 2 08:57:54.287608 kubelet[2988]: I0702 08:57:54.287439    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b12b01a531d560fbf6543090236972f5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-171\" (UID: \"b12b01a531d560fbf6543090236972f5\") " pod="kube-system/kube-apiserver-ip-172-31-24-171"
Jul 2 08:57:54.287608 kubelet[2988]: I0702 08:57:54.287484    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b12b01a531d560fbf6543090236972f5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-171\" (UID: \"b12b01a531d560fbf6543090236972f5\") " pod="kube-system/kube-apiserver-ip-172-31-24-171"
Jul 2 08:57:54.287977 kubelet[2988]: I0702 08:57:54.287549    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:57:54.287977 kubelet[2988]: I0702 08:57:54.287594    2988 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:57:54.290297 kubelet[2988]: E0702 08:57:54.290249    2988 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-171?timeout=10s\": dial tcp 172.31.24.171:6443: connect: connection refused" interval="400ms"
Jul 2 08:57:54.394453 kubelet[2988]: I0702 08:57:54.394320    2988 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-171"
Jul 2 08:57:54.395294 kubelet[2988]: E0702 08:57:54.395249    2988 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.171:6443/api/v1/nodes\": dial tcp 172.31.24.171:6443: connect: connection refused" node="ip-172-31-24-171"
Jul 2 08:57:54.546141 containerd[2129]: time="2024-07-02T08:57:54.545890395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-171,Uid:b12b01a531d560fbf6543090236972f5,Namespace:kube-system,Attempt:0,}"
Jul 2 08:57:54.554273 containerd[2129]: time="2024-07-02T08:57:54.554192232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-171,Uid:ee4232a9cf01e66113de0e7b9b74f6d9,Namespace:kube-system,Attempt:0,}"
Jul 2 08:57:54.556663 containerd[2129]: time="2024-07-02T08:57:54.556586636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-171,Uid:4d7814f7c2796483e1d265f8bd35ad34,Namespace:kube-system,Attempt:0,}"
Jul 2 08:57:54.690869 kubelet[2988]: E0702 08:57:54.690769    2988 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-171?timeout=10s\": dial tcp 172.31.24.171:6443: connect: connection refused" interval="800ms"
Jul 2 08:57:54.797411 kubelet[2988]: I0702 08:57:54.797274    2988 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-171"
Jul 2 08:57:54.797867 kubelet[2988]: E0702 08:57:54.797749    2988 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.171:6443/api/v1/nodes\": dial tcp 172.31.24.171:6443: connect: connection refused" node="ip-172-31-24-171"
Jul 2 08:57:54.892150 kubelet[2988]: W0702 08:57:54.892074    2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.24.171:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.892338 kubelet[2988]: E0702 08:57:54.892164    2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.171:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.901937 kubelet[2988]: W0702 08:57:54.901871    2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.24.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.901937 kubelet[2988]: E0702 08:57:54.901935    2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.983608 kubelet[2988]: W0702 08:57:54.983555    2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.24.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:54.983821 kubelet[2988]: E0702 08:57:54.983627    2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused
Jul 2 08:57:55.054479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170387062.mount: Deactivated successfully.
Jul 2 08:57:55.065764 containerd[2129]: time="2024-07-02T08:57:55.065687023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 08:57:55.067488 containerd[2129]: time="2024-07-02T08:57:55.067431675Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 08:57:55.069520 containerd[2129]: time="2024-07-02T08:57:55.069335586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 08:57:55.071740 containerd[2129]: time="2024-07-02T08:57:55.071403764Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jul 2 08:57:55.078863 containerd[2129]: time="2024-07-02T08:57:55.078766287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 08:57:55.080230 containerd[2129]: time="2024-07-02T08:57:55.079108314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 08:57:55.084531 containerd[2129]: time="2024-07-02T08:57:55.084457287Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 08:57:55.088384 containerd[2129]: time="2024-07-02T08:57:55.088153118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.984947ms"
Jul 2 08:57:55.092373 containerd[2129]: time="2024-07-02T08:57:55.091939535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 08:57:55.096243 containerd[2129]: time="2024-07-02T08:57:55.096108798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.076072ms"
Jul 2 08:57:55.104804 containerd[2129]: time="2024-07-02T08:57:55.102737779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.410312ms"
Jul 2 08:57:55.281188 containerd[2129]: time="2024-07-02T08:57:55.281019341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:57:55.281549 containerd[2129]: time="2024-07-02T08:57:55.281389510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:57:55.281807 containerd[2129]: time="2024-07-02T08:57:55.281637097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:57:55.282035 containerd[2129]: time="2024-07-02T08:57:55.281766654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:57:55.288546 containerd[2129]: time="2024-07-02T08:57:55.288385298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:57:55.288546 containerd[2129]: time="2024-07-02T08:57:55.288474358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:57:55.288546 containerd[2129]: time="2024-07-02T08:57:55.288506883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:57:55.289353 containerd[2129]: time="2024-07-02T08:57:55.288532047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:57:55.290571 containerd[2129]: time="2024-07-02T08:57:55.288303621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:57:55.290798 containerd[2129]: time="2024-07-02T08:57:55.290545489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:57:55.290798 containerd[2129]: time="2024-07-02T08:57:55.290584161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:57:55.290798 containerd[2129]: time="2024-07-02T08:57:55.290609830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:57:55.438183 containerd[2129]: time="2024-07-02T08:57:55.436696353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-171,Uid:ee4232a9cf01e66113de0e7b9b74f6d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f245498f1108d2fcfa16b7095a503630ea7b991e73d309126595f7129550d6d6\""
Jul 2 08:57:55.452729 containerd[2129]: time="2024-07-02T08:57:55.452567397Z" level=info msg="CreateContainer within sandbox \"f245498f1108d2fcfa16b7095a503630ea7b991e73d309126595f7129550d6d6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 08:57:55.473177 containerd[2129]: time="2024-07-02T08:57:55.472994526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-171,Uid:b12b01a531d560fbf6543090236972f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecf59f1209ed08a9b4f6fe56ac93069631a1b5f6cfa516efafa77698f2d533ff\""
Jul 2 08:57:55.480347 containerd[2129]: time="2024-07-02T08:57:55.479456383Z" level=info msg="CreateContainer within sandbox \"ecf59f1209ed08a9b4f6fe56ac93069631a1b5f6cfa516efafa77698f2d533ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 08:57:55.492440 kubelet[2988]: E0702 08:57:55.492382    2988 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-171?timeout=10s\": dial tcp 172.31.24.171:6443: connect: connection refused" interval="1.6s"
Jul 2 08:57:55.498241 containerd[2129]: time="2024-07-02T08:57:55.498086285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-171,Uid:4d7814f7c2796483e1d265f8bd35ad34,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fc94343c8a6f372d35bae2573d60c8248bd053f2ccc9c9b8dd1bc62c35f3a88\""
Jul 2 08:57:55.502984 containerd[2129]: time="2024-07-02T08:57:55.502915507Z"
level=info msg="CreateContainer within sandbox \"7fc94343c8a6f372d35bae2573d60c8248bd053f2ccc9c9b8dd1bc62c35f3a88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:57:55.506202 containerd[2129]: time="2024-07-02T08:57:55.506129837Z" level=info msg="CreateContainer within sandbox \"f245498f1108d2fcfa16b7095a503630ea7b991e73d309126595f7129550d6d6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5150f4f5cab5b635b098c4d7625d3419249dadd4fcd08d7adf81c1b3ad17288a\"" Jul 2 08:57:55.507838 containerd[2129]: time="2024-07-02T08:57:55.507720608Z" level=info msg="CreateContainer within sandbox \"ecf59f1209ed08a9b4f6fe56ac93069631a1b5f6cfa516efafa77698f2d533ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c01d6ac20e4fb8136c6d3b4045a23efaa149035117158fe3a4ea85929d48649\"" Jul 2 08:57:55.508628 containerd[2129]: time="2024-07-02T08:57:55.508487600Z" level=info msg="StartContainer for \"5150f4f5cab5b635b098c4d7625d3419249dadd4fcd08d7adf81c1b3ad17288a\"" Jul 2 08:57:55.508814 containerd[2129]: time="2024-07-02T08:57:55.508509667Z" level=info msg="StartContainer for \"3c01d6ac20e4fb8136c6d3b4045a23efaa149035117158fe3a4ea85929d48649\"" Jul 2 08:57:55.536147 containerd[2129]: time="2024-07-02T08:57:55.536071157Z" level=info msg="CreateContainer within sandbox \"7fc94343c8a6f372d35bae2573d60c8248bd053f2ccc9c9b8dd1bc62c35f3a88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b0196a82e85681824116c9b1b2d31dc8c08544e2eb55d9372cb1154c444450b\"" Jul 2 08:57:55.537179 containerd[2129]: time="2024-07-02T08:57:55.536988667Z" level=info msg="StartContainer for \"9b0196a82e85681824116c9b1b2d31dc8c08544e2eb55d9372cb1154c444450b\"" Jul 2 08:57:55.580494 kubelet[2988]: W0702 08:57:55.578613 2988 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://172.31.24.171:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-171&limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused Jul 2 08:57:55.581941 kubelet[2988]: E0702 08:57:55.580960 2988 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.171:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-171&limit=500&resourceVersion=0": dial tcp 172.31.24.171:6443: connect: connection refused Jul 2 08:57:55.604020 kubelet[2988]: I0702 08:57:55.603937 2988 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-171" Jul 2 08:57:55.605957 kubelet[2988]: E0702 08:57:55.604559 2988 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.171:6443/api/v1/nodes\": dial tcp 172.31.24.171:6443: connect: connection refused" node="ip-172-31-24-171" Jul 2 08:57:55.718362 containerd[2129]: time="2024-07-02T08:57:55.718082418Z" level=info msg="StartContainer for \"3c01d6ac20e4fb8136c6d3b4045a23efaa149035117158fe3a4ea85929d48649\" returns successfully" Jul 2 08:57:55.738311 containerd[2129]: time="2024-07-02T08:57:55.736959032Z" level=info msg="StartContainer for \"5150f4f5cab5b635b098c4d7625d3419249dadd4fcd08d7adf81c1b3ad17288a\" returns successfully" Jul 2 08:57:55.783391 containerd[2129]: time="2024-07-02T08:57:55.781621292Z" level=info msg="StartContainer for \"9b0196a82e85681824116c9b1b2d31dc8c08544e2eb55d9372cb1154c444450b\" returns successfully" Jul 2 08:57:57.211638 kubelet[2988]: I0702 08:57:57.211587 2988 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-171" Jul 2 08:57:59.664906 kubelet[2988]: E0702 08:57:59.664844 2988 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-171\" not found" node="ip-172-31-24-171" Jul 2 08:57:59.758810 kubelet[2988]: I0702 08:57:59.757698 2988 
kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-24-171" Jul 2 08:58:00.066600 kubelet[2988]: I0702 08:58:00.066400 2988 apiserver.go:52] "Watching apiserver" Jul 2 08:58:00.088797 kubelet[2988]: I0702 08:58:00.087618 2988 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:58:01.548766 update_engine[2096]: I0702 08:58:01.547797 2096 update_attempter.cc:509] Updating boot flags... Jul 2 08:58:01.668710 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3278) Jul 2 08:58:02.196848 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3282) Jul 2 08:58:02.501998 systemd[1]: Reloading requested from client PID 3446 ('systemctl') (unit session-7.scope)... Jul 2 08:58:02.502030 systemd[1]: Reloading... Jul 2 08:58:02.839815 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3282) Jul 2 08:58:02.869608 zram_generator::config[3493]: No configuration found. Jul 2 08:58:03.210346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:03.386868 systemd[1]: Reloading finished in 882 ms. Jul 2 08:58:03.511715 kubelet[2988]: I0702 08:58:03.511566 2988 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:58:03.512162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:03.550220 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:58:03.556123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:03.567599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 08:58:03.843754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:58:03.862552 (kubelet)[3640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 08:58:03.967191 kubelet[3640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:58:03.968767 kubelet[3640]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:58:03.968767 kubelet[3640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:58:03.968767 kubelet[3640]: I0702 08:58:03.968048 3640 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:58:03.982271 kubelet[3640]: I0702 08:58:03.982226 3640 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 08:58:03.982271 kubelet[3640]: I0702 08:58:03.982271 3640 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:58:03.982656 kubelet[3640]: I0702 08:58:03.982626 3640 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 08:58:03.986813 kubelet[3640]: I0702 08:58:03.986715 3640 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 08:58:03.988850 kubelet[3640]: I0702 08:58:03.988690 3640 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:58:03.997662 kubelet[3640]: W0702 08:58:03.997620 3640 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 08:58:03.999087 kubelet[3640]: I0702 08:58:03.999032 3640 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:58:03.999899 kubelet[3640]: I0702 08:58:03.999855 3640 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:58:04.000257 kubelet[3640]: I0702 08:58:04.000220 3640 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:58:04.000431 kubelet[3640]: I0702 08:58:04.000279 3640 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:58:04.000431 kubelet[3640]: I0702 08:58:04.000300 3640 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:58:04.000431 kubelet[3640]: I0702 08:58:04.000364 3640 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:58:04.000579 kubelet[3640]: I0702 08:58:04.000537 3640 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 08:58:04.001000 kubelet[3640]: I0702 08:58:04.000563 3640 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:58:04.001000 kubelet[3640]: I0702 08:58:04.000922 3640 kubelet.go:309] "Adding apiserver pod source"
Jul 2 08:58:04.001000 kubelet[3640]: I0702 08:58:04.000965 3640 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:58:04.011871 kubelet[3640]: I0702 08:58:04.008936 3640 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 08:58:04.011871 kubelet[3640]: I0702 08:58:04.010976 3640 server.go:1232] "Started kubelet"
Jul 2 08:58:04.015116 kubelet[3640]: I0702 08:58:04.014836 3640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:58:04.019279 kubelet[3640]: I0702 08:58:04.019220 3640 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:58:04.031015 kubelet[3640]: I0702 08:58:04.030941 3640 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 08:58:04.042586 kubelet[3640]: I0702 08:58:04.042508 3640 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:58:04.046974 kubelet[3640]: I0702 08:58:04.046929 3640 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:58:04.048264 kubelet[3640]: I0702 08:58:04.048222 3640 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 08:58:04.054804 kubelet[3640]: I0702 08:58:04.054549 3640 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 08:58:04.056983 kubelet[3640]: I0702 08:58:04.056945 3640 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 08:58:04.062195 kubelet[3640]: E0702 08:58:04.061230 3640 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 08:58:04.062195 kubelet[3640]: E0702 08:58:04.061283 3640 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:58:04.081614 kubelet[3640]: I0702 08:58:04.081305 3640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:58:04.085642 kubelet[3640]: I0702 08:58:04.085605 3640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:58:04.086343 kubelet[3640]: I0702 08:58:04.085852 3640 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:58:04.086343 kubelet[3640]: I0702 08:58:04.085898 3640 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 08:58:04.086343 kubelet[3640]: E0702 08:58:04.085978 3640 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:58:04.156266 kubelet[3640]: I0702 08:58:04.156126 3640 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-171"
Jul 2 08:58:04.174917 kubelet[3640]: I0702 08:58:04.174879 3640 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-24-171"
Jul 2 08:58:04.176691 kubelet[3640]: I0702 08:58:04.176658 3640 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-24-171"
Jul 2 08:58:04.186481 kubelet[3640]: E0702 08:58:04.186393 3640 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 08:58:04.330669 kubelet[3640]: I0702 08:58:04.330046 3640 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:58:04.330669 kubelet[3640]: I0702 08:58:04.330108 3640 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:58:04.330669 kubelet[3640]: I0702 08:58:04.330141 3640 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:58:04.330669 kubelet[3640]: I0702 08:58:04.330375 3640 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 08:58:04.330669 kubelet[3640]: I0702 08:58:04.330415 3640 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 08:58:04.330669 kubelet[3640]: I0702 08:58:04.330435 3640 policy_none.go:49] "None policy: Start"
Jul 2 08:58:04.331927 kubelet[3640]: I0702 08:58:04.331886 3640 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 08:58:04.332005 kubelet[3640]: I0702 08:58:04.331937 3640 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:58:04.332345 kubelet[3640]: I0702 08:58:04.332307 3640 state_mem.go:75] "Updated machine memory state"
Jul 2 08:58:04.334815 kubelet[3640]: I0702 08:58:04.334709 3640 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:58:04.338108 kubelet[3640]: I0702 08:58:04.337903 3640 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:58:04.387497 kubelet[3640]: I0702 08:58:04.387441 3640 topology_manager.go:215] "Topology Admit Handler" podUID="b12b01a531d560fbf6543090236972f5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-171"
Jul 2 08:58:04.387634 kubelet[3640]: I0702 08:58:04.387618 3640 topology_manager.go:215] "Topology Admit Handler" podUID="ee4232a9cf01e66113de0e7b9b74f6d9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-171"
Jul 2 08:58:04.388100 kubelet[3640]: I0702 08:58:04.387718 3640 topology_manager.go:215] "Topology Admit Handler" podUID="4d7814f7c2796483e1d265f8bd35ad34" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-171"
Jul 2 08:58:04.401052 kubelet[3640]: E0702 08:58:04.401014 3640 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-24-171\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:58:04.403181 kubelet[3640]: E0702 08:58:04.403089 3640 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-171\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-171"
Jul 2 08:58:04.461622 kubelet[3640]: I0702 08:58:04.461375 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:58:04.461768 kubelet[3640]: I0702 08:58:04.461693 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d7814f7c2796483e1d265f8bd35ad34-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-171\" (UID: \"4d7814f7c2796483e1d265f8bd35ad34\") " pod="kube-system/kube-scheduler-ip-172-31-24-171"
Jul 2 08:58:04.463345 kubelet[3640]: I0702 08:58:04.462679 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b12b01a531d560fbf6543090236972f5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-171\" (UID: \"b12b01a531d560fbf6543090236972f5\") " pod="kube-system/kube-apiserver-ip-172-31-24-171"
Jul 2 08:58:04.463345 kubelet[3640]: I0702 08:58:04.462920 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b12b01a531d560fbf6543090236972f5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-171\" (UID: \"b12b01a531d560fbf6543090236972f5\") " pod="kube-system/kube-apiserver-ip-172-31-24-171"
Jul 2 08:58:04.463345 kubelet[3640]: I0702 08:58:04.462999 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:58:04.463345 kubelet[3640]: I0702 08:58:04.463073 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:58:04.463345 kubelet[3640]: I0702 08:58:04.463127 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b12b01a531d560fbf6543090236972f5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-171\" (UID: \"b12b01a531d560fbf6543090236972f5\") " pod="kube-system/kube-apiserver-ip-172-31-24-171"
Jul 2 08:58:04.463700 kubelet[3640]: I0702 08:58:04.463171 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:58:04.463700 kubelet[3640]: I0702 08:58:04.463216 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee4232a9cf01e66113de0e7b9b74f6d9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-171\" (UID: \"ee4232a9cf01e66113de0e7b9b74f6d9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-171"
Jul 2 08:58:05.023371 kubelet[3640]: I0702 08:58:05.023303 3640 apiserver.go:52] "Watching apiserver"
Jul 2 08:58:05.055169 kubelet[3640]: I0702 08:58:05.055072 3640 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 08:58:05.268389 kubelet[3640]: I0702 08:58:05.268325 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-171" podStartSLOduration=5.268182176 podCreationTimestamp="2024-07-02 08:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:05.238553277 +0000 UTC m=+1.367155443" watchObservedRunningTime="2024-07-02 08:58:05.268182176 +0000 UTC m=+1.396784354"
Jul 2 08:58:05.294468 kubelet[3640]: I0702 08:58:05.293971 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-171" podStartSLOduration=1.292316685 podCreationTimestamp="2024-07-02 08:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:05.270903396 +0000 UTC m=+1.399505598" watchObservedRunningTime="2024-07-02 08:58:05.292316685 +0000 UTC m=+1.420918863"
Jul 2 08:58:05.340439 kubelet[3640]: I0702 08:58:05.340160 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-171" podStartSLOduration=3.340104732 podCreationTimestamp="2024-07-02 08:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:05.294638346 +0000 UTC m=+1.423240500" watchObservedRunningTime="2024-07-02 08:58:05.340104732 +0000 UTC m=+1.468706898"
Jul 2 08:58:10.896055 sudo[2471]: pam_unix(sudo:session): session closed for user root
Jul 2 08:58:10.919622 sshd[2467]: pam_unix(sshd:session): session closed for user core
Jul 2 08:58:10.925084 systemd[1]: sshd@6-172.31.24.171:22-147.75.109.163:39834.service: Deactivated successfully.
Jul 2 08:58:10.933381 systemd-logind[2093]: Session 7 logged out. Waiting for processes to exit.
Jul 2 08:58:10.935606 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 08:58:10.939286 systemd-logind[2093]: Removed session 7.
Jul 2 08:58:15.740838 kubelet[3640]: I0702 08:58:15.738685 3640 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 08:58:15.741523 containerd[2129]: time="2024-07-02T08:58:15.740130426Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 08:58:15.747189 kubelet[3640]: I0702 08:58:15.744390 3640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 08:58:16.439559 kubelet[3640]: I0702 08:58:16.439501 3640 topology_manager.go:215] "Topology Admit Handler" podUID="8da1cd4c-136f-4ea1-b567-2f64aa7db20b" podNamespace="kube-system" podName="kube-proxy-25stn"
Jul 2 08:58:16.545012 kubelet[3640]: I0702 08:58:16.544698 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjf8\" (UniqueName: \"kubernetes.io/projected/8da1cd4c-136f-4ea1-b567-2f64aa7db20b-kube-api-access-cxjf8\") pod \"kube-proxy-25stn\" (UID: \"8da1cd4c-136f-4ea1-b567-2f64aa7db20b\") " pod="kube-system/kube-proxy-25stn"
Jul 2 08:58:16.545690 kubelet[3640]: I0702 08:58:16.545630 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8da1cd4c-136f-4ea1-b567-2f64aa7db20b-lib-modules\") pod \"kube-proxy-25stn\" (UID: \"8da1cd4c-136f-4ea1-b567-2f64aa7db20b\") " pod="kube-system/kube-proxy-25stn"
Jul 2 08:58:16.546578 kubelet[3640]: I0702 08:58:16.546169 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8da1cd4c-136f-4ea1-b567-2f64aa7db20b-kube-proxy\") pod \"kube-proxy-25stn\" (UID: \"8da1cd4c-136f-4ea1-b567-2f64aa7db20b\") " pod="kube-system/kube-proxy-25stn"
Jul 2 08:58:16.546578 kubelet[3640]: I0702 08:58:16.546324 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8da1cd4c-136f-4ea1-b567-2f64aa7db20b-xtables-lock\") pod \"kube-proxy-25stn\" (UID: \"8da1cd4c-136f-4ea1-b567-2f64aa7db20b\") " pod="kube-system/kube-proxy-25stn"
Jul 2 08:58:16.754261 containerd[2129]: time="2024-07-02T08:58:16.754083128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25stn,Uid:8da1cd4c-136f-4ea1-b567-2f64aa7db20b,Namespace:kube-system,Attempt:0,}"
Jul 2 08:58:16.815635 containerd[2129]: time="2024-07-02T08:58:16.815134599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:58:16.815635 containerd[2129]: time="2024-07-02T08:58:16.815250805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:58:16.815635 containerd[2129]: time="2024-07-02T08:58:16.815293271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:58:16.815635 containerd[2129]: time="2024-07-02T08:58:16.815327608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:58:16.834417 kubelet[3640]: I0702 08:58:16.829954 3640 topology_manager.go:215] "Topology Admit Handler" podUID="d0f1f228-b5d4-4069-bc15-7cb48bc97e5d" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-gt6ts"
Jul 2 08:58:16.849862 kubelet[3640]: I0702 08:58:16.849642 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwk6z\" (UniqueName: \"kubernetes.io/projected/d0f1f228-b5d4-4069-bc15-7cb48bc97e5d-kube-api-access-nwk6z\") pod \"tigera-operator-76c4974c85-gt6ts\" (UID: \"d0f1f228-b5d4-4069-bc15-7cb48bc97e5d\") " pod="tigera-operator/tigera-operator-76c4974c85-gt6ts"
Jul 2 08:58:16.849862 kubelet[3640]: I0702 08:58:16.849714 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0f1f228-b5d4-4069-bc15-7cb48bc97e5d-var-lib-calico\") pod \"tigera-operator-76c4974c85-gt6ts\" (UID: \"d0f1f228-b5d4-4069-bc15-7cb48bc97e5d\") " pod="tigera-operator/tigera-operator-76c4974c85-gt6ts"
Jul 2 08:58:16.922807 containerd[2129]: time="2024-07-02T08:58:16.922682751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25stn,Uid:8da1cd4c-136f-4ea1-b567-2f64aa7db20b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dfe903a3ee935bb64f6aecd9a4e3d0d0f50c019970945a2523a6aef04d0c457\""
Jul 2 08:58:16.928197 containerd[2129]: time="2024-07-02T08:58:16.928104025Z" level=info msg="CreateContainer within sandbox \"5dfe903a3ee935bb64f6aecd9a4e3d0d0f50c019970945a2523a6aef04d0c457\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 08:58:16.954142 containerd[2129]: time="2024-07-02T08:58:16.954078838Z" level=info msg="CreateContainer within sandbox \"5dfe903a3ee935bb64f6aecd9a4e3d0d0f50c019970945a2523a6aef04d0c457\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a7011651b3e591184076e9f4037bc9b5b2a6709637d2104a183cd3b8ad2b0ebc\""
Jul 2 08:58:16.956567 containerd[2129]: time="2024-07-02T08:58:16.956416622Z" level=info msg="StartContainer for \"a7011651b3e591184076e9f4037bc9b5b2a6709637d2104a183cd3b8ad2b0ebc\""
Jul 2 08:58:17.069078 containerd[2129]: time="2024-07-02T08:58:17.068966630Z" level=info msg="StartContainer for \"a7011651b3e591184076e9f4037bc9b5b2a6709637d2104a183cd3b8ad2b0ebc\" returns successfully"
Jul 2 08:58:17.147982 containerd[2129]: time="2024-07-02T08:58:17.147444822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-gt6ts,Uid:d0f1f228-b5d4-4069-bc15-7cb48bc97e5d,Namespace:tigera-operator,Attempt:0,}"
Jul 2 08:58:17.201255 containerd[2129]: time="2024-07-02T08:58:17.201068877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:58:17.201255 containerd[2129]: time="2024-07-02T08:58:17.201186319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:58:17.202211 kubelet[3640]: I0702 08:58:17.202156 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-25stn" podStartSLOduration=1.202103302 podCreationTimestamp="2024-07-02 08:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:17.201923608 +0000 UTC m=+13.330525786" watchObservedRunningTime="2024-07-02 08:58:17.202103302 +0000 UTC m=+13.330705480"
Jul 2 08:58:17.207816 containerd[2129]: time="2024-07-02T08:58:17.201229001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:58:17.207816 containerd[2129]: time="2024-07-02T08:58:17.202042960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:58:17.319038 containerd[2129]: time="2024-07-02T08:58:17.318969214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-gt6ts,Uid:d0f1f228-b5d4-4069-bc15-7cb48bc97e5d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fc6075aaf8fe5cde5222c01c2622d26b13e210ff50fb77aad54ce1ffca78093a\""
Jul 2 08:58:17.323031 containerd[2129]: time="2024-07-02T08:58:17.322881718Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 08:58:17.677844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449450199.mount: Deactivated successfully.
Jul 2 08:58:18.717883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058363885.mount: Deactivated successfully.
Jul 2 08:58:19.303937 containerd[2129]: time="2024-07-02T08:58:19.303859581Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:58:19.307879 containerd[2129]: time="2024-07-02T08:58:19.307736787Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:58:19.308971 containerd[2129]: time="2024-07-02T08:58:19.308264078Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473622"
Jul 2 08:58:19.320354 containerd[2129]: time="2024-07-02T08:58:19.320293466Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:58:19.321905 containerd[2129]: time="2024-07-02T08:58:19.321858364Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.99887929s"
Jul 2 08:58:19.322114 containerd[2129]: time="2024-07-02T08:58:19.322082216Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\""
Jul 2 08:58:19.325513 containerd[2129]: time="2024-07-02T08:58:19.325425587Z" level=info msg="CreateContainer within sandbox \"fc6075aaf8fe5cde5222c01c2622d26b13e210ff50fb77aad54ce1ffca78093a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 08:58:19.345649 containerd[2129]: time="2024-07-02T08:58:19.345530523Z" level=info msg="CreateContainer within sandbox \"fc6075aaf8fe5cde5222c01c2622d26b13e210ff50fb77aad54ce1ffca78093a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4147881de338f1879f15129b1e38dca060f8262366241656c02004d69cd99c3a\""
Jul 2 08:58:19.348551 containerd[2129]: time="2024-07-02T08:58:19.347462264Z" level=info msg="StartContainer for \"4147881de338f1879f15129b1e38dca060f8262366241656c02004d69cd99c3a\""
Jul 2 08:58:19.446452 containerd[2129]: time="2024-07-02T08:58:19.446378918Z" level=info msg="StartContainer for \"4147881de338f1879f15129b1e38dca060f8262366241656c02004d69cd99c3a\" returns successfully"
Jul 2 08:58:24.106840 kubelet[3640]: I0702 08:58:24.106755 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-gt6ts" podStartSLOduration=6.105382082 podCreationTimestamp="2024-07-02 08:58:16 +0000 UTC" firstStartedPulling="2024-07-02 08:58:17.321367005 +0000 UTC m=+13.449969171" lastFinishedPulling="2024-07-02 08:58:19.322678062 +0000 UTC m=+15.451280216" observedRunningTime="2024-07-02 08:58:20.204136173 +0000 UTC m=+16.332738339" watchObservedRunningTime="2024-07-02 08:58:24.106693127 +0000 UTC m=+20.235295305"
Jul 2 08:58:24.921434
kubelet[3640]: I0702 08:58:24.921366 3640 topology_manager.go:215] "Topology Admit Handler" podUID="d83c43d1-89ea-489e-b9aa-2abd680bb2c9" podNamespace="calico-system" podName="calico-typha-6767455f65-gfrhp" Jul 2 08:58:25.003645 kubelet[3640]: I0702 08:58:25.002064 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqzg\" (UniqueName: \"kubernetes.io/projected/d83c43d1-89ea-489e-b9aa-2abd680bb2c9-kube-api-access-wkqzg\") pod \"calico-typha-6767455f65-gfrhp\" (UID: \"d83c43d1-89ea-489e-b9aa-2abd680bb2c9\") " pod="calico-system/calico-typha-6767455f65-gfrhp" Jul 2 08:58:25.003645 kubelet[3640]: I0702 08:58:25.002477 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d83c43d1-89ea-489e-b9aa-2abd680bb2c9-typha-certs\") pod \"calico-typha-6767455f65-gfrhp\" (UID: \"d83c43d1-89ea-489e-b9aa-2abd680bb2c9\") " pod="calico-system/calico-typha-6767455f65-gfrhp" Jul 2 08:58:25.004070 kubelet[3640]: I0702 08:58:25.003985 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d83c43d1-89ea-489e-b9aa-2abd680bb2c9-tigera-ca-bundle\") pod \"calico-typha-6767455f65-gfrhp\" (UID: \"d83c43d1-89ea-489e-b9aa-2abd680bb2c9\") " pod="calico-system/calico-typha-6767455f65-gfrhp" Jul 2 08:58:25.096322 kubelet[3640]: I0702 08:58:25.096270 3640 topology_manager.go:215] "Topology Admit Handler" podUID="dac81b0c-20c6-484b-aff5-a726dcb3b6bd" podNamespace="calico-system" podName="calico-node-dtgzz" Jul 2 08:58:25.207743 kubelet[3640]: I0702 08:58:25.206871 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-cni-bin-dir\") pod \"calico-node-dtgzz\" (UID: 
\"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.207743 kubelet[3640]: I0702 08:58:25.206946 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-policysync\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.207743 kubelet[3640]: I0702 08:58:25.207040 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-lib-modules\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.207743 kubelet[3640]: I0702 08:58:25.207089 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-flexvol-driver-host\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.207743 kubelet[3640]: I0702 08:58:25.207136 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-tigera-ca-bundle\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.209386 kubelet[3640]: I0702 08:58:25.207179 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-xtables-lock\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " 
pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.209386 kubelet[3640]: I0702 08:58:25.207226 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-node-certs\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.209386 kubelet[3640]: I0702 08:58:25.207277 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-486jf\" (UniqueName: \"kubernetes.io/projected/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-kube-api-access-486jf\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.209386 kubelet[3640]: I0702 08:58:25.207320 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-cni-log-dir\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.209386 kubelet[3640]: I0702 08:58:25.207364 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-var-run-calico\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.209669 kubelet[3640]: I0702 08:58:25.207412 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-var-lib-calico\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.209669 
kubelet[3640]: I0702 08:58:25.207456 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dac81b0c-20c6-484b-aff5-a726dcb3b6bd-cni-net-dir\") pod \"calico-node-dtgzz\" (UID: \"dac81b0c-20c6-484b-aff5-a726dcb3b6bd\") " pod="calico-system/calico-node-dtgzz" Jul 2 08:58:25.234835 kubelet[3640]: I0702 08:58:25.232672 3640 topology_manager.go:215] "Topology Admit Handler" podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" podNamespace="calico-system" podName="csi-node-driver-sxv4x" Jul 2 08:58:25.234835 kubelet[3640]: E0702 08:58:25.233111 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sxv4x" podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" Jul 2 08:58:25.240606 containerd[2129]: time="2024-07-02T08:58:25.240555047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6767455f65-gfrhp,Uid:d83c43d1-89ea-489e-b9aa-2abd680bb2c9,Namespace:calico-system,Attempt:0,}" Jul 2 08:58:25.309478 kubelet[3640]: I0702 08:58:25.309015 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7247e25-4e0b-429d-8736-192b57c4aae4-registration-dir\") pod \"csi-node-driver-sxv4x\" (UID: \"d7247e25-4e0b-429d-8736-192b57c4aae4\") " pod="calico-system/csi-node-driver-sxv4x" Jul 2 08:58:25.309478 kubelet[3640]: I0702 08:58:25.309135 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7247e25-4e0b-429d-8736-192b57c4aae4-kubelet-dir\") pod \"csi-node-driver-sxv4x\" (UID: \"d7247e25-4e0b-429d-8736-192b57c4aae4\") " pod="calico-system/csi-node-driver-sxv4x" Jul 2 
08:58:25.309478 kubelet[3640]: I0702 08:58:25.309180 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d7247e25-4e0b-429d-8736-192b57c4aae4-socket-dir\") pod \"csi-node-driver-sxv4x\" (UID: \"d7247e25-4e0b-429d-8736-192b57c4aae4\") " pod="calico-system/csi-node-driver-sxv4x" Jul 2 08:58:25.309478 kubelet[3640]: I0702 08:58:25.309229 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hx6\" (UniqueName: \"kubernetes.io/projected/d7247e25-4e0b-429d-8736-192b57c4aae4-kube-api-access-j6hx6\") pod \"csi-node-driver-sxv4x\" (UID: \"d7247e25-4e0b-429d-8736-192b57c4aae4\") " pod="calico-system/csi-node-driver-sxv4x" Jul 2 08:58:25.309478 kubelet[3640]: I0702 08:58:25.309410 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7247e25-4e0b-429d-8736-192b57c4aae4-varrun\") pod \"csi-node-driver-sxv4x\" (UID: \"d7247e25-4e0b-429d-8736-192b57c4aae4\") " pod="calico-system/csi-node-driver-sxv4x" Jul 2 08:58:25.328903 kubelet[3640]: E0702 08:58:25.327861 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.328903 kubelet[3640]: W0702 08:58:25.327907 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.328903 kubelet[3640]: E0702 08:58:25.327952 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.333961 kubelet[3640]: E0702 08:58:25.333905 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.333961 kubelet[3640]: W0702 08:58:25.333946 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.335100 kubelet[3640]: E0702 08:58:25.333985 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.337153 kubelet[3640]: E0702 08:58:25.336952 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.337153 kubelet[3640]: W0702 08:58:25.336990 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.337153 kubelet[3640]: E0702 08:58:25.337028 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.341248 kubelet[3640]: E0702 08:58:25.340944 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.341248 kubelet[3640]: W0702 08:58:25.340977 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.341248 kubelet[3640]: E0702 08:58:25.341052 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.349421 kubelet[3640]: E0702 08:58:25.348074 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.350412 kubelet[3640]: W0702 08:58:25.350155 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.350412 kubelet[3640]: E0702 08:58:25.350226 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.372530 kubelet[3640]: E0702 08:58:25.371980 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.372530 kubelet[3640]: W0702 08:58:25.372040 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.372530 kubelet[3640]: E0702 08:58:25.372131 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.376535 kubelet[3640]: E0702 08:58:25.373304 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.376535 kubelet[3640]: W0702 08:58:25.373330 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.376535 kubelet[3640]: E0702 08:58:25.373535 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.376535 kubelet[3640]: E0702 08:58:25.375962 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.376535 kubelet[3640]: W0702 08:58:25.376107 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.376535 kubelet[3640]: E0702 08:58:25.376194 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.385668 kubelet[3640]: E0702 08:58:25.381930 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.385668 kubelet[3640]: W0702 08:58:25.381962 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.385668 kubelet[3640]: E0702 08:58:25.382650 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.387004 kubelet[3640]: E0702 08:58:25.385992 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.387004 kubelet[3640]: W0702 08:58:25.386035 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.387004 kubelet[3640]: E0702 08:58:25.386124 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.395919 kubelet[3640]: E0702 08:58:25.395652 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.395919 kubelet[3640]: W0702 08:58:25.395685 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.397164 kubelet[3640]: E0702 08:58:25.396556 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.399163 kubelet[3640]: E0702 08:58:25.398875 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.399163 kubelet[3640]: W0702 08:58:25.398944 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.400355 kubelet[3640]: E0702 08:58:25.399471 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.405057 kubelet[3640]: E0702 08:58:25.403982 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.405057 kubelet[3640]: W0702 08:58:25.404022 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.411051 kubelet[3640]: E0702 08:58:25.411002 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.413169 kubelet[3640]: E0702 08:58:25.412911 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.413169 kubelet[3640]: W0702 08:58:25.412938 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.413169 kubelet[3640]: E0702 08:58:25.412984 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.417817 kubelet[3640]: E0702 08:58:25.416176 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.417817 kubelet[3640]: W0702 08:58:25.416216 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.417817 kubelet[3640]: E0702 08:58:25.417290 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.424636 kubelet[3640]: E0702 08:58:25.421483 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.424636 kubelet[3640]: W0702 08:58:25.421627 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.424636 kubelet[3640]: E0702 08:58:25.421673 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.424636 kubelet[3640]: E0702 08:58:25.423130 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.424636 kubelet[3640]: W0702 08:58:25.423303 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.424636 kubelet[3640]: E0702 08:58:25.423494 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.426645 kubelet[3640]: E0702 08:58:25.426586 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.426852 kubelet[3640]: W0702 08:58:25.426810 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.427836 kubelet[3640]: E0702 08:58:25.427464 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.427836 kubelet[3640]: W0702 08:58:25.427511 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.427997 kubelet[3640]: E0702 08:58:25.427886 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.429812 kubelet[3640]: E0702 08:58:25.428533 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.432623 kubelet[3640]: E0702 08:58:25.431750 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.432623 kubelet[3640]: W0702 08:58:25.431843 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.432623 kubelet[3640]: E0702 08:58:25.432301 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.432906 kubelet[3640]: E0702 08:58:25.432857 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.432906 kubelet[3640]: W0702 08:58:25.432878 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.434810 kubelet[3640]: E0702 08:58:25.433003 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.434810 kubelet[3640]: E0702 08:58:25.433479 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.434810 kubelet[3640]: W0702 08:58:25.433500 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.434810 kubelet[3640]: E0702 08:58:25.433644 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.434810 kubelet[3640]: E0702 08:58:25.434161 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.434810 kubelet[3640]: W0702 08:58:25.434184 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.434810 kubelet[3640]: E0702 08:58:25.434264 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.436337 kubelet[3640]: E0702 08:58:25.436082 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.436337 kubelet[3640]: W0702 08:58:25.436139 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.436337 kubelet[3640]: E0702 08:58:25.436243 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.437322 kubelet[3640]: E0702 08:58:25.437251 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.437322 kubelet[3640]: W0702 08:58:25.437315 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.437475 kubelet[3640]: E0702 08:58:25.437437 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.438834 kubelet[3640]: E0702 08:58:25.438742 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.438834 kubelet[3640]: W0702 08:58:25.438806 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.439282 kubelet[3640]: E0702 08:58:25.439251 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.440441 kubelet[3640]: E0702 08:58:25.440198 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.440441 kubelet[3640]: W0702 08:58:25.440234 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.440441 kubelet[3640]: E0702 08:58:25.440379 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.441906 containerd[2129]: time="2024-07-02T08:58:25.441716707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:25.442261 kubelet[3640]: E0702 08:58:25.442222 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.442261 kubelet[3640]: W0702 08:58:25.442256 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.442470 containerd[2129]: time="2024-07-02T08:58:25.441853995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:25.443161 kubelet[3640]: E0702 08:58:25.442574 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.443437 containerd[2129]: time="2024-07-02T08:58:25.442817681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:25.443437 containerd[2129]: time="2024-07-02T08:58:25.442920068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:25.443610 kubelet[3640]: E0702 08:58:25.443577 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.443686 kubelet[3640]: W0702 08:58:25.443605 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.443945 kubelet[3640]: E0702 08:58:25.443853 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.445288 kubelet[3640]: E0702 08:58:25.444982 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.445288 kubelet[3640]: W0702 08:58:25.445025 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.445288 kubelet[3640]: E0702 08:58:25.445234 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.447966 kubelet[3640]: E0702 08:58:25.447919 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.447966 kubelet[3640]: W0702 08:58:25.447958 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.448275 kubelet[3640]: E0702 08:58:25.448101 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.450142 kubelet[3640]: E0702 08:58:25.450094 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.450142 kubelet[3640]: W0702 08:58:25.450132 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.450432 kubelet[3640]: E0702 08:58:25.450276 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.450487 kubelet[3640]: E0702 08:58:25.450469 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.452161 kubelet[3640]: W0702 08:58:25.450484 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.452161 kubelet[3640]: E0702 08:58:25.450836 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.452161 kubelet[3640]: W0702 08:58:25.450853 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.452161 kubelet[3640]: E0702 08:58:25.451153 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.452161 kubelet[3640]: W0702 08:58:25.451170 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Jul 2 08:58:25.452161 kubelet[3640]: E0702 08:58:25.451368 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.452161 kubelet[3640]: E0702 08:58:25.451411 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.452161 kubelet[3640]: E0702 08:58:25.451450 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.452161 kubelet[3640]: W0702 08:58:25.451464 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.452161 kubelet[3640]: E0702 08:58:25.451492 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.452716 kubelet[3640]: E0702 08:58:25.451516 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.452716 kubelet[3640]: E0702 08:58:25.451768 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.452716 kubelet[3640]: W0702 08:58:25.451837 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.452716 kubelet[3640]: E0702 08:58:25.452579 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.453696 kubelet[3640]: E0702 08:58:25.453070 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.453696 kubelet[3640]: W0702 08:58:25.453095 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.453696 kubelet[3640]: E0702 08:58:25.453142 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.453696 kubelet[3640]: E0702 08:58:25.453535 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.453696 kubelet[3640]: W0702 08:58:25.453554 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.455687 kubelet[3640]: E0702 08:58:25.454423 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.455687 kubelet[3640]: E0702 08:58:25.455158 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.455687 kubelet[3640]: W0702 08:58:25.455182 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.455687 kubelet[3640]: E0702 08:58:25.455512 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.455687 kubelet[3640]: W0702 08:58:25.455529 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.456324 kubelet[3640]: E0702 08:58:25.456297 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.456502 kubelet[3640]: W0702 08:58:25.456477 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Jul 2 08:58:25.456715 kubelet[3640]: E0702 08:58:25.456403 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.456715 kubelet[3640]: E0702 08:58:25.456425 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.457220 kubelet[3640]: E0702 08:58:25.456978 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.458236 kubelet[3640]: E0702 08:58:25.458114 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.460676 kubelet[3640]: W0702 08:58:25.458600 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.460676 kubelet[3640]: E0702 08:58:25.458694 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.460676 kubelet[3640]: E0702 08:58:25.460440 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.460676 kubelet[3640]: W0702 08:58:25.460466 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.460676 kubelet[3640]: E0702 08:58:25.460513 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.461625 kubelet[3640]: E0702 08:58:25.461602 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.461766 kubelet[3640]: W0702 08:58:25.461742 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.462211 kubelet[3640]: E0702 08:58:25.462191 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.462321 kubelet[3640]: W0702 08:58:25.462300 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.462432 kubelet[3640]: E0702 08:58:25.462413 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.462554 kubelet[3640]: E0702 08:58:25.462537 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.463982 kubelet[3640]: E0702 08:58:25.463677 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.463982 kubelet[3640]: W0702 08:58:25.463705 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.463982 kubelet[3640]: E0702 08:58:25.463737 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:25.513734 kubelet[3640]: E0702 08:58:25.513701 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:25.513968 kubelet[3640]: W0702 08:58:25.513941 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:25.514128 kubelet[3640]: E0702 08:58:25.514105 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:25.599640 containerd[2129]: time="2024-07-02T08:58:25.599576831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6767455f65-gfrhp,Uid:d83c43d1-89ea-489e-b9aa-2abd680bb2c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3dac13bee5e97521c45de5298c5347050c8c2f7e35deeaaa3a9b9ef32c793428\"" Jul 2 08:58:25.603148 containerd[2129]: time="2024-07-02T08:58:25.603102489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 08:58:25.708924 containerd[2129]: time="2024-07-02T08:58:25.707975355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dtgzz,Uid:dac81b0c-20c6-484b-aff5-a726dcb3b6bd,Namespace:calico-system,Attempt:0,}" Jul 2 08:58:25.766626 containerd[2129]: time="2024-07-02T08:58:25.766160094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:25.766626 containerd[2129]: time="2024-07-02T08:58:25.766270501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:25.766626 containerd[2129]: time="2024-07-02T08:58:25.766313710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:25.766626 containerd[2129]: time="2024-07-02T08:58:25.766347639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:25.879444 containerd[2129]: time="2024-07-02T08:58:25.879238954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dtgzz,Uid:dac81b0c-20c6-484b-aff5-a726dcb3b6bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"3bde90ce9d153dae859e72c02e1e0f49ada5ae05009d5b2ba3ac4c8ba68add5d\"" Jul 2 08:58:27.089423 kubelet[3640]: E0702 08:58:27.087524 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sxv4x" podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" Jul 2 08:58:28.525561 containerd[2129]: time="2024-07-02T08:58:28.525506609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:28.528867 containerd[2129]: time="2024-07-02T08:58:28.528317249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 08:58:28.530529 containerd[2129]: time="2024-07-02T08:58:28.529641139Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:28.550765 containerd[2129]: time="2024-07-02T08:58:28.550373688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:28.562687 containerd[2129]: time="2024-07-02T08:58:28.562587512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo 
digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.958783057s" Jul 2 08:58:28.562687 containerd[2129]: time="2024-07-02T08:58:28.562676200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 08:58:28.572626 containerd[2129]: time="2024-07-02T08:58:28.572531854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 08:58:28.624135 containerd[2129]: time="2024-07-02T08:58:28.624081044Z" level=info msg="CreateContainer within sandbox \"3dac13bee5e97521c45de5298c5347050c8c2f7e35deeaaa3a9b9ef32c793428\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 08:58:28.661695 containerd[2129]: time="2024-07-02T08:58:28.661569335Z" level=info msg="CreateContainer within sandbox \"3dac13bee5e97521c45de5298c5347050c8c2f7e35deeaaa3a9b9ef32c793428\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"27f627529f3288cd3a06d978bd95c88920c9a18e7e614036f2546a9e5284a0ad\"" Jul 2 08:58:28.666671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776799574.mount: Deactivated successfully. 
Jul 2 08:58:28.668762 containerd[2129]: time="2024-07-02T08:58:28.667067075Z" level=info msg="StartContainer for \"27f627529f3288cd3a06d978bd95c88920c9a18e7e614036f2546a9e5284a0ad\"" Jul 2 08:58:28.869929 containerd[2129]: time="2024-07-02T08:58:28.867425437Z" level=info msg="StartContainer for \"27f627529f3288cd3a06d978bd95c88920c9a18e7e614036f2546a9e5284a0ad\" returns successfully" Jul 2 08:58:29.087191 kubelet[3640]: E0702 08:58:29.086656 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sxv4x" podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" Jul 2 08:58:29.249451 kubelet[3640]: E0702 08:58:29.248905 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.249451 kubelet[3640]: W0702 08:58:29.249006 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.249451 kubelet[3640]: E0702 08:58:29.249046 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.250712 kubelet[3640]: E0702 08:58:29.250682 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.251002 kubelet[3640]: W0702 08:58:29.250880 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.251002 kubelet[3640]: E0702 08:58:29.250922 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.252072 kubelet[3640]: E0702 08:58:29.251921 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.252072 kubelet[3640]: W0702 08:58:29.251949 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.252072 kubelet[3640]: E0702 08:58:29.252004 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.252979 kubelet[3640]: E0702 08:58:29.252808 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.252979 kubelet[3640]: W0702 08:58:29.252838 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.252979 kubelet[3640]: E0702 08:58:29.252870 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.254258 kubelet[3640]: E0702 08:58:29.254113 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.254258 kubelet[3640]: W0702 08:58:29.254143 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.254258 kubelet[3640]: E0702 08:58:29.254202 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.256005 kubelet[3640]: E0702 08:58:29.255753 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.256005 kubelet[3640]: W0702 08:58:29.255811 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.256005 kubelet[3640]: E0702 08:58:29.255849 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.257087 kubelet[3640]: E0702 08:58:29.256769 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.257087 kubelet[3640]: W0702 08:58:29.256886 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.257087 kubelet[3640]: E0702 08:58:29.256920 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.258203 kubelet[3640]: E0702 08:58:29.257967 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.258203 kubelet[3640]: W0702 08:58:29.257996 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.258203 kubelet[3640]: E0702 08:58:29.258031 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.258946 kubelet[3640]: E0702 08:58:29.258854 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.259335 kubelet[3640]: W0702 08:58:29.258897 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.259335 kubelet[3640]: E0702 08:58:29.259160 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.260347 kubelet[3640]: E0702 08:58:29.259988 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.260347 kubelet[3640]: W0702 08:58:29.260018 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.260347 kubelet[3640]: E0702 08:58:29.260053 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.265920 kubelet[3640]: E0702 08:58:29.265860 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.266705 kubelet[3640]: W0702 08:58:29.266176 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.266705 kubelet[3640]: E0702 08:58:29.266255 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.268445 kubelet[3640]: E0702 08:58:29.268376 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.268445 kubelet[3640]: W0702 08:58:29.268415 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.268445 kubelet[3640]: E0702 08:58:29.268452 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.270001 kubelet[3640]: E0702 08:58:29.269931 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.270001 kubelet[3640]: W0702 08:58:29.269971 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.270001 kubelet[3640]: E0702 08:58:29.270009 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.270610 kubelet[3640]: E0702 08:58:29.270559 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.270610 kubelet[3640]: W0702 08:58:29.270583 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.270728 kubelet[3640]: E0702 08:58:29.270617 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.271687 kubelet[3640]: E0702 08:58:29.271316 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.271687 kubelet[3640]: W0702 08:58:29.271352 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.271687 kubelet[3640]: E0702 08:58:29.271386 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:58:29.284550 kubelet[3640]: E0702 08:58:29.284492 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.284550 kubelet[3640]: W0702 08:58:29.284529 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.284746 kubelet[3640]: E0702 08:58:29.284568 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:58:29.285735 kubelet[3640]: E0702 08:58:29.285445 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:58:29.285735 kubelet[3640]: W0702 08:58:29.285477 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:58:29.285735 kubelet[3640]: E0702 08:58:29.285523 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 2 08:58:29.286442 kubelet[3640]: E0702 08:58:29.286403 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.286623 kubelet[3640]: W0702 08:58:29.286541 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.286623 kubelet[3640]: E0702 08:58:29.286584 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.287804 kubelet[3640]: E0702 08:58:29.287556 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.287804 kubelet[3640]: W0702 08:58:29.287593 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.287804 kubelet[3640]: E0702 08:58:29.287629 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.288303 kubelet[3640]: E0702 08:58:29.288263 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.288448 kubelet[3640]: W0702 08:58:29.288407 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.288707 kubelet[3640]: E0702 08:58:29.288635 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.290386 kubelet[3640]: E0702 08:58:29.290320 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.290386 kubelet[3640]: W0702 08:58:29.290351 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.290874 kubelet[3640]: E0702 08:58:29.290659 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.291126 kubelet[3640]: E0702 08:58:29.291104 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.291367 kubelet[3640]: W0702 08:58:29.291242 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.291367 kubelet[3640]: E0702 08:58:29.291328 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.291998 kubelet[3640]: E0702 08:58:29.291825 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.291998 kubelet[3640]: W0702 08:58:29.291847 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.291998 kubelet[3640]: E0702 08:58:29.291960 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.293477 kubelet[3640]: E0702 08:58:29.292327 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.293938 kubelet[3640]: W0702 08:58:29.293676 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.294667 kubelet[3640]: E0702 08:58:29.294632 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.295175 kubelet[3640]: W0702 08:58:29.294925 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.295462 kubelet[3640]: E0702 08:58:29.295399 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.295462 kubelet[3640]: E0702 08:58:29.295447 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.296553 kubelet[3640]: E0702 08:58:29.296072 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.296553 kubelet[3640]: W0702 08:58:29.296097 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.296553 kubelet[3640]: E0702 08:58:29.296161 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.298125 kubelet[3640]: E0702 08:58:29.297650 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.298125 kubelet[3640]: W0702 08:58:29.297685 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.298125 kubelet[3640]: E0702 08:58:29.297738 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.298379 kubelet[3640]: E0702 08:58:29.298222 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.298379 kubelet[3640]: W0702 08:58:29.298245 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.298379 kubelet[3640]: E0702 08:58:29.298275 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.299460 kubelet[3640]: E0702 08:58:29.299178 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.299460 kubelet[3640]: W0702 08:58:29.299219 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.299460 kubelet[3640]: E0702 08:58:29.299255 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.300890 kubelet[3640]: E0702 08:58:29.300262 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.300890 kubelet[3640]: W0702 08:58:29.300322 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.300890 kubelet[3640]: E0702 08:58:29.300366 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.302472 kubelet[3640]: E0702 08:58:29.302237 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.302472 kubelet[3640]: W0702 08:58:29.302268 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.302472 kubelet[3640]: E0702 08:58:29.302319 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.304690 kubelet[3640]: E0702 08:58:29.303907 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.305142 kubelet[3640]: W0702 08:58:29.304712 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.305142 kubelet[3640]: E0702 08:58:29.304768 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:29.305843 kubelet[3640]: E0702 08:58:29.305270 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:29.305843 kubelet[3640]: W0702 08:58:29.305299 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:29.305843 kubelet[3640]: E0702 08:58:29.305330 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.091659 containerd[2129]: time="2024-07-02T08:58:30.089415371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:58:30.097404 containerd[2129]: time="2024-07-02T08:58:30.097278473Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:58:30.097404 containerd[2129]: time="2024-07-02T08:58:30.097396312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009"
Jul 2 08:58:30.112580 containerd[2129]: time="2024-07-02T08:58:30.112479810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:58:30.116143 containerd[2129]: time="2024-07-02T08:58:30.116068499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.543469184s"
Jul 2 08:58:30.116143 containerd[2129]: time="2024-07-02T08:58:30.116136009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\""
Jul 2 08:58:30.120970 containerd[2129]: time="2024-07-02T08:58:30.120884670Z" level=info msg="CreateContainer within sandbox \"3bde90ce9d153dae859e72c02e1e0f49ada5ae05009d5b2ba3ac4c8ba68add5d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 2 08:58:30.143418 containerd[2129]: time="2024-07-02T08:58:30.143102134Z" level=info msg="CreateContainer within sandbox \"3bde90ce9d153dae859e72c02e1e0f49ada5ae05009d5b2ba3ac4c8ba68add5d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5f8b0850f46611129e43129ff6db982e7372b4d59877965fad3a1d640f88042\""
Jul 2 08:58:30.144189 containerd[2129]: time="2024-07-02T08:58:30.144072759Z" level=info msg="StartContainer for \"a5f8b0850f46611129e43129ff6db982e7372b4d59877965fad3a1d640f88042\""
Jul 2 08:58:30.248582 kubelet[3640]: I0702 08:58:30.248504 3640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 08:58:30.278207 kubelet[3640]: E0702 08:58:30.278159 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.278207 kubelet[3640]: W0702 08:58:30.278195 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.278380 kubelet[3640]: E0702 08:58:30.278231 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.279901 kubelet[3640]: E0702 08:58:30.279852 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.279901 kubelet[3640]: W0702 08:58:30.279891 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.280104 kubelet[3640]: E0702 08:58:30.279927 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.280932 kubelet[3640]: E0702 08:58:30.280882 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.281166 kubelet[3640]: W0702 08:58:30.281091 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.281166 kubelet[3640]: E0702 08:58:30.281140 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.283159 kubelet[3640]: E0702 08:58:30.283111 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.283159 kubelet[3640]: W0702 08:58:30.283148 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.283368 kubelet[3640]: E0702 08:58:30.283187 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.285122 kubelet[3640]: E0702 08:58:30.285050 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.285122 kubelet[3640]: W0702 08:58:30.285101 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.285422 kubelet[3640]: E0702 08:58:30.285139 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.287560 kubelet[3640]: E0702 08:58:30.287516 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.287560 kubelet[3640]: W0702 08:58:30.287549 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.287756 kubelet[3640]: E0702 08:58:30.287582 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.289207 kubelet[3640]: E0702 08:58:30.289169 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.289207 kubelet[3640]: W0702 08:58:30.289200 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.289344 kubelet[3640]: E0702 08:58:30.289231 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.290258 kubelet[3640]: E0702 08:58:30.290208 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.290258 kubelet[3640]: W0702 08:58:30.290239 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.290437 kubelet[3640]: E0702 08:58:30.290268 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.292010 kubelet[3640]: E0702 08:58:30.291956 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.292010 kubelet[3640]: W0702 08:58:30.291995 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.292172 kubelet[3640]: E0702 08:58:30.292032 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.294131 kubelet[3640]: E0702 08:58:30.294064 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.294131 kubelet[3640]: W0702 08:58:30.294100 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.294350 kubelet[3640]: E0702 08:58:30.294149 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.296239 kubelet[3640]: E0702 08:58:30.295968 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.296239 kubelet[3640]: W0702 08:58:30.296003 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.296239 kubelet[3640]: E0702 08:58:30.296160 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.298287 kubelet[3640]: E0702 08:58:30.298241 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.298287 kubelet[3640]: W0702 08:58:30.298277 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.298924 kubelet[3640]: E0702 08:58:30.298319 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.299489 kubelet[3640]: E0702 08:58:30.299424 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.299489 kubelet[3640]: W0702 08:58:30.299483 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.299630 kubelet[3640]: E0702 08:58:30.299549 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.301235 kubelet[3640]: E0702 08:58:30.301164 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.301235 kubelet[3640]: W0702 08:58:30.301202 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.301235 kubelet[3640]: E0702 08:58:30.301238 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.303113 kubelet[3640]: E0702 08:58:30.302822 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.303113 kubelet[3640]: W0702 08:58:30.302861 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.303113 kubelet[3640]: E0702 08:58:30.302898 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.305159 kubelet[3640]: E0702 08:58:30.305073 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.305662 kubelet[3640]: W0702 08:58:30.305406 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.305662 kubelet[3640]: E0702 08:58:30.305587 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.307024 kubelet[3640]: E0702 08:58:30.306957 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.307401 kubelet[3640]: W0702 08:58:30.307119 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.307401 kubelet[3640]: E0702 08:58:30.307299 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.309951 kubelet[3640]: E0702 08:58:30.309557 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.309951 kubelet[3640]: W0702 08:58:30.309590 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.309951 kubelet[3640]: E0702 08:58:30.309647 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.310959 kubelet[3640]: E0702 08:58:30.310756 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.310959 kubelet[3640]: W0702 08:58:30.310918 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.312043 kubelet[3640]: E0702 08:58:30.311684 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.312731 kubelet[3640]: E0702 08:58:30.312338 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.312731 kubelet[3640]: W0702 08:58:30.312476 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.313546 kubelet[3640]: E0702 08:58:30.313168 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.314253 kubelet[3640]: E0702 08:58:30.314181 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.314253 kubelet[3640]: W0702 08:58:30.314212 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.314921 kubelet[3640]: E0702 08:58:30.314553 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.316334 kubelet[3640]: E0702 08:58:30.316282 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.317999 kubelet[3640]: W0702 08:58:30.317933 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.319408 kubelet[3640]: E0702 08:58:30.319377 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.319924 kubelet[3640]: W0702 08:58:30.319696 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.321770 kubelet[3640]: E0702 08:58:30.321508 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.321770 kubelet[3640]: W0702 08:58:30.321544 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.321770 kubelet[3640]: E0702 08:58:30.321580 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.322985 kubelet[3640]: E0702 08:58:30.322382 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.322985 kubelet[3640]: W0702 08:58:30.322410 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.322985 kubelet[3640]: E0702 08:58:30.322446 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.324208 kubelet[3640]: E0702 08:58:30.324177 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.325425 kubelet[3640]: W0702 08:58:30.325385 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.325590 kubelet[3640]: E0702 08:58:30.325570 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.328303 kubelet[3640]: E0702 08:58:30.327012 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.328303 kubelet[3640]: E0702 08:58:30.327689 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.328303 kubelet[3640]: W0702 08:58:30.327916 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.328303 kubelet[3640]: E0702 08:58:30.327945 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.330215 kubelet[3640]: E0702 08:58:30.327720 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.330614 kubelet[3640]: E0702 08:58:30.330529 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.330614 kubelet[3640]: W0702 08:58:30.330562 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.330614 kubelet[3640]: E0702 08:58:30.330600 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.332279 kubelet[3640]: E0702 08:58:30.332238 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.332279 kubelet[3640]: W0702 08:58:30.332269 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.332569 kubelet[3640]: E0702 08:58:30.332350 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.332706 kubelet[3640]: E0702 08:58:30.332655 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.332706 kubelet[3640]: W0702 08:58:30.332681 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.332958 kubelet[3640]: E0702 08:58:30.332709 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.333744 kubelet[3640]: E0702 08:58:30.333676 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.333744 kubelet[3640]: W0702 08:58:30.333704 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.334150 kubelet[3640]: E0702 08:58:30.333866 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.335026 kubelet[3640]: E0702 08:58:30.334538 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.335026 kubelet[3640]: W0702 08:58:30.334561 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.335026 kubelet[3640]: E0702 08:58:30.334617 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.335589 kubelet[3640]: E0702 08:58:30.335564 3640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 08:58:30.335717 kubelet[3640]: W0702 08:58:30.335694 3640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 08:58:30.335896 kubelet[3640]: E0702 08:58:30.335842 3640 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 08:58:30.340403 containerd[2129]: time="2024-07-02T08:58:30.340344435Z" level=info msg="StartContainer for \"a5f8b0850f46611129e43129ff6db982e7372b4d59877965fad3a1d640f88042\" returns successfully"
Jul 2 08:58:30.586484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5f8b0850f46611129e43129ff6db982e7372b4d59877965fad3a1d640f88042-rootfs.mount: Deactivated successfully.
Jul 2 08:58:30.654041 containerd[2129]: time="2024-07-02T08:58:30.653332645Z" level=info msg="shim disconnected" id=a5f8b0850f46611129e43129ff6db982e7372b4d59877965fad3a1d640f88042 namespace=k8s.io
Jul 2 08:58:30.655625 containerd[2129]: time="2024-07-02T08:58:30.654038442Z" level=warning msg="cleaning up after shim disconnected" id=a5f8b0850f46611129e43129ff6db982e7372b4d59877965fad3a1d640f88042 namespace=k8s.io
Jul 2 08:58:30.655625 containerd[2129]: time="2024-07-02T08:58:30.654066560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:58:31.086997 kubelet[3640]: E0702 08:58:31.086943 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sxv4x" podUID="d7247e25-4e0b-429d-8736-192b57c4aae4"
Jul 2 08:58:31.256392 containerd[2129]: time="2024-07-02T08:58:31.255472725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Jul 2 08:58:31.290259 kubelet[3640]: I0702 08:58:31.289511 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6767455f65-gfrhp" podStartSLOduration=4.326853302 podCreationTimestamp="2024-07-02 08:58:24 +0000 UTC" firstStartedPulling="2024-07-02 08:58:25.602514363 +0000 UTC m=+21.731116529" lastFinishedPulling="2024-07-02 08:58:28.565065935 +0000 UTC m=+24.693668113" observedRunningTime="2024-07-02 08:58:29.264044082 +0000 UTC m=+25.392646272" watchObservedRunningTime="2024-07-02 08:58:31.289404886 +0000 UTC m=+27.418007064"
Jul 2 08:58:33.086538 kubelet[3640]: E0702 08:58:33.086491 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sxv4x"
podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" Jul 2 08:58:35.089056 kubelet[3640]: E0702 08:58:35.087027 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sxv4x" podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" Jul 2 08:58:35.707940 containerd[2129]: time="2024-07-02T08:58:35.707682060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:35.709282 containerd[2129]: time="2024-07-02T08:58:35.709227280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 08:58:35.710072 containerd[2129]: time="2024-07-02T08:58:35.709983886Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:35.715149 containerd[2129]: time="2024-07-02T08:58:35.715059447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:35.716907 containerd[2129]: time="2024-07-02T08:58:35.716695180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.461155799s" Jul 2 08:58:35.716907 containerd[2129]: time="2024-07-02T08:58:35.716752317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference 
\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 08:58:35.719951 containerd[2129]: time="2024-07-02T08:58:35.719852170Z" level=info msg="CreateContainer within sandbox \"3bde90ce9d153dae859e72c02e1e0f49ada5ae05009d5b2ba3ac4c8ba68add5d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 08:58:35.738863 containerd[2129]: time="2024-07-02T08:58:35.738711783Z" level=info msg="CreateContainer within sandbox \"3bde90ce9d153dae859e72c02e1e0f49ada5ae05009d5b2ba3ac4c8ba68add5d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"34a13d40bba0d0be8b390befce61a2c3b8c604bba45dfe79a147ac8e48e0d9de\"" Jul 2 08:58:35.739927 containerd[2129]: time="2024-07-02T08:58:35.739865151Z" level=info msg="StartContainer for \"34a13d40bba0d0be8b390befce61a2c3b8c604bba45dfe79a147ac8e48e0d9de\"" Jul 2 08:58:35.799472 systemd[1]: run-containerd-runc-k8s.io-34a13d40bba0d0be8b390befce61a2c3b8c604bba45dfe79a147ac8e48e0d9de-runc.6T2RAz.mount: Deactivated successfully. Jul 2 08:58:35.853593 containerd[2129]: time="2024-07-02T08:58:35.853334867Z" level=info msg="StartContainer for \"34a13d40bba0d0be8b390befce61a2c3b8c604bba45dfe79a147ac8e48e0d9de\" returns successfully" Jul 2 08:58:36.487151 containerd[2129]: time="2024-07-02T08:58:36.487071113Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:58:36.528511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34a13d40bba0d0be8b390befce61a2c3b8c604bba45dfe79a147ac8e48e0d9de-rootfs.mount: Deactivated successfully. 
Jul 2 08:58:36.536943 kubelet[3640]: I0702 08:58:36.534246 3640 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 08:58:36.580813 kubelet[3640]: I0702 08:58:36.577856 3640 topology_manager.go:215] "Topology Admit Handler" podUID="0a48d9a9-af65-44c5-a924-37f85c1c6d43" podNamespace="kube-system" podName="coredns-5dd5756b68-h8wml" Jul 2 08:58:36.580813 kubelet[3640]: I0702 08:58:36.578561 3640 topology_manager.go:215] "Topology Admit Handler" podUID="7e42c34c-834d-42ba-9014-171b25b9d834" podNamespace="calico-system" podName="calico-kube-controllers-5f9584999-qbdw8" Jul 2 08:58:36.583990 kubelet[3640]: I0702 08:58:36.583197 3640 topology_manager.go:215] "Topology Admit Handler" podUID="3809dd66-ffcd-4834-a237-f7d845ce4984" podNamespace="kube-system" podName="coredns-5dd5756b68-d9fmw" Jul 2 08:58:36.669795 kubelet[3640]: I0702 08:58:36.669742 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzj9s\" (UniqueName: \"kubernetes.io/projected/0a48d9a9-af65-44c5-a924-37f85c1c6d43-kube-api-access-pzj9s\") pod \"coredns-5dd5756b68-h8wml\" (UID: \"0a48d9a9-af65-44c5-a924-37f85c1c6d43\") " pod="kube-system/coredns-5dd5756b68-h8wml" Jul 2 08:58:36.670309 kubelet[3640]: I0702 08:58:36.670145 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lzmc\" (UniqueName: \"kubernetes.io/projected/7e42c34c-834d-42ba-9014-171b25b9d834-kube-api-access-6lzmc\") pod \"calico-kube-controllers-5f9584999-qbdw8\" (UID: \"7e42c34c-834d-42ba-9014-171b25b9d834\") " pod="calico-system/calico-kube-controllers-5f9584999-qbdw8" Jul 2 08:58:36.670309 kubelet[3640]: I0702 08:58:36.670234 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a48d9a9-af65-44c5-a924-37f85c1c6d43-config-volume\") pod \"coredns-5dd5756b68-h8wml\" (UID: 
\"0a48d9a9-af65-44c5-a924-37f85c1c6d43\") " pod="kube-system/coredns-5dd5756b68-h8wml" Jul 2 08:58:36.670624 kubelet[3640]: I0702 08:58:36.670458 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3809dd66-ffcd-4834-a237-f7d845ce4984-config-volume\") pod \"coredns-5dd5756b68-d9fmw\" (UID: \"3809dd66-ffcd-4834-a237-f7d845ce4984\") " pod="kube-system/coredns-5dd5756b68-d9fmw" Jul 2 08:58:36.670624 kubelet[3640]: I0702 08:58:36.670559 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb7g4\" (UniqueName: \"kubernetes.io/projected/3809dd66-ffcd-4834-a237-f7d845ce4984-kube-api-access-pb7g4\") pod \"coredns-5dd5756b68-d9fmw\" (UID: \"3809dd66-ffcd-4834-a237-f7d845ce4984\") " pod="kube-system/coredns-5dd5756b68-d9fmw" Jul 2 08:58:36.670848 kubelet[3640]: I0702 08:58:36.670807 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e42c34c-834d-42ba-9014-171b25b9d834-tigera-ca-bundle\") pod \"calico-kube-controllers-5f9584999-qbdw8\" (UID: \"7e42c34c-834d-42ba-9014-171b25b9d834\") " pod="calico-system/calico-kube-controllers-5f9584999-qbdw8" Jul 2 08:58:36.900906 containerd[2129]: time="2024-07-02T08:58:36.900825718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h8wml,Uid:0a48d9a9-af65-44c5-a924-37f85c1c6d43,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:36.907837 containerd[2129]: time="2024-07-02T08:58:36.907768331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9584999-qbdw8,Uid:7e42c34c-834d-42ba-9014-171b25b9d834,Namespace:calico-system,Attempt:0,}" Jul 2 08:58:36.914159 containerd[2129]: time="2024-07-02T08:58:36.913989072Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-d9fmw,Uid:3809dd66-ffcd-4834-a237-f7d845ce4984,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:37.094208 containerd[2129]: time="2024-07-02T08:58:37.093065527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sxv4x,Uid:d7247e25-4e0b-429d-8736-192b57c4aae4,Namespace:calico-system,Attempt:0,}" Jul 2 08:58:37.596862 containerd[2129]: time="2024-07-02T08:58:37.596464096Z" level=info msg="shim disconnected" id=34a13d40bba0d0be8b390befce61a2c3b8c604bba45dfe79a147ac8e48e0d9de namespace=k8s.io Jul 2 08:58:37.596862 containerd[2129]: time="2024-07-02T08:58:37.596567696Z" level=warning msg="cleaning up after shim disconnected" id=34a13d40bba0d0be8b390befce61a2c3b8c604bba45dfe79a147ac8e48e0d9de namespace=k8s.io Jul 2 08:58:37.596862 containerd[2129]: time="2024-07-02T08:58:37.596594745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:58:37.689811 containerd[2129]: time="2024-07-02T08:58:37.688585798Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:58:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 08:58:37.876148 containerd[2129]: time="2024-07-02T08:58:37.875962961Z" level=error msg="Failed to destroy network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.884596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd-shm.mount: Deactivated successfully. 
Jul 2 08:58:37.891375 containerd[2129]: time="2024-07-02T08:58:37.891293003Z" level=error msg="encountered an error cleaning up failed sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.891533 containerd[2129]: time="2024-07-02T08:58:37.891397863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d9fmw,Uid:3809dd66-ffcd-4834-a237-f7d845ce4984,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.892372 kubelet[3640]: E0702 08:58:37.891749 3640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.892372 kubelet[3640]: E0702 08:58:37.891875 3640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-d9fmw" Jul 2 08:58:37.892372 kubelet[3640]: E0702 08:58:37.891915 3640 
kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-d9fmw" Jul 2 08:58:37.893202 kubelet[3640]: E0702 08:58:37.892017 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-d9fmw_kube-system(3809dd66-ffcd-4834-a237-f7d845ce4984)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-d9fmw_kube-system(3809dd66-ffcd-4834-a237-f7d845ce4984)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-d9fmw" podUID="3809dd66-ffcd-4834-a237-f7d845ce4984" Jul 2 08:58:37.908884 containerd[2129]: time="2024-07-02T08:58:37.908017193Z" level=error msg="Failed to destroy network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.911444 containerd[2129]: time="2024-07-02T08:58:37.910820630Z" level=error msg="Failed to destroy network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 2 08:58:37.911576 containerd[2129]: time="2024-07-02T08:58:37.911466048Z" level=error msg="encountered an error cleaning up failed sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.911576 containerd[2129]: time="2024-07-02T08:58:37.911542827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9584999-qbdw8,Uid:7e42c34c-834d-42ba-9014-171b25b9d834,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.913483 kubelet[3640]: E0702 08:58:37.913173 3640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.913483 kubelet[3640]: E0702 08:58:37.913260 3640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9584999-qbdw8" Jul 2 
08:58:37.913483 kubelet[3640]: E0702 08:58:37.913300 3640 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9584999-qbdw8" Jul 2 08:58:37.913748 kubelet[3640]: E0702 08:58:37.913385 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f9584999-qbdw8_calico-system(7e42c34c-834d-42ba-9014-171b25b9d834)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f9584999-qbdw8_calico-system(7e42c34c-834d-42ba-9014-171b25b9d834)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9584999-qbdw8" podUID="7e42c34c-834d-42ba-9014-171b25b9d834" Jul 2 08:58:37.915830 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d-shm.mount: Deactivated successfully. 
Jul 2 08:58:37.918540 containerd[2129]: time="2024-07-02T08:58:37.916742361Z" level=error msg="Failed to destroy network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.919396 containerd[2129]: time="2024-07-02T08:58:37.919052436Z" level=error msg="encountered an error cleaning up failed sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.919396 containerd[2129]: time="2024-07-02T08:58:37.919143057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sxv4x,Uid:d7247e25-4e0b-429d-8736-192b57c4aae4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.922608 containerd[2129]: time="2024-07-02T08:58:37.921661183Z" level=error msg="encountered an error cleaning up failed sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.922608 containerd[2129]: time="2024-07-02T08:58:37.922435474Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-h8wml,Uid:0a48d9a9-af65-44c5-a924-37f85c1c6d43,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.923888 kubelet[3640]: E0702 08:58:37.922110 3640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.923888 kubelet[3640]: E0702 08:58:37.922175 3640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sxv4x" Jul 2 08:58:37.923888 kubelet[3640]: E0702 08:58:37.922228 3640 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sxv4x" Jul 2 08:58:37.924142 kubelet[3640]: E0702 08:58:37.922356 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-sxv4x_calico-system(d7247e25-4e0b-429d-8736-192b57c4aae4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sxv4x_calico-system(d7247e25-4e0b-429d-8736-192b57c4aae4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sxv4x" podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" Jul 2 08:58:37.926481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9-shm.mount: Deactivated successfully. Jul 2 08:58:37.928102 kubelet[3640]: E0702 08:58:37.927062 3640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:37.928102 kubelet[3640]: E0702 08:58:37.927372 3640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-h8wml" Jul 2 08:58:37.928102 kubelet[3640]: E0702 08:58:37.927438 3640 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-h8wml" Jul 2 08:58:37.928370 kubelet[3640]: E0702 08:58:37.927612 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-h8wml_kube-system(0a48d9a9-af65-44c5-a924-37f85c1c6d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-h8wml_kube-system(0a48d9a9-af65-44c5-a924-37f85c1c6d43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-h8wml" podUID="0a48d9a9-af65-44c5-a924-37f85c1c6d43" Jul 2 08:58:37.928498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3-shm.mount: Deactivated successfully. 
Jul 2 08:58:38.283132 containerd[2129]: time="2024-07-02T08:58:38.283038508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 08:58:38.286865 kubelet[3640]: I0702 08:58:38.286285 3640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:58:38.289805 containerd[2129]: time="2024-07-02T08:58:38.289098789Z" level=info msg="StopPodSandbox for \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\"" Jul 2 08:58:38.290048 containerd[2129]: time="2024-07-02T08:58:38.289461634Z" level=info msg="Ensure that sandbox 4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd in task-service has been cleanup successfully" Jul 2 08:58:38.293234 kubelet[3640]: I0702 08:58:38.293197 3640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:58:38.298239 containerd[2129]: time="2024-07-02T08:58:38.298185074Z" level=info msg="StopPodSandbox for \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\"" Jul 2 08:58:38.304349 containerd[2129]: time="2024-07-02T08:58:38.303987874Z" level=info msg="Ensure that sandbox c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d in task-service has been cleanup successfully" Jul 2 08:58:38.308362 kubelet[3640]: I0702 08:58:38.308027 3640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:58:38.320073 containerd[2129]: time="2024-07-02T08:58:38.320018970Z" level=info msg="StopPodSandbox for \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\"" Jul 2 08:58:38.324671 containerd[2129]: time="2024-07-02T08:58:38.324550238Z" level=info msg="Ensure that sandbox 805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9 in task-service has been cleanup 
successfully" Jul 2 08:58:38.330695 kubelet[3640]: I0702 08:58:38.329832 3640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:58:38.336188 containerd[2129]: time="2024-07-02T08:58:38.335066101Z" level=info msg="StopPodSandbox for \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\"" Jul 2 08:58:38.336188 containerd[2129]: time="2024-07-02T08:58:38.335393121Z" level=info msg="Ensure that sandbox b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3 in task-service has been cleanup successfully" Jul 2 08:58:38.441358 containerd[2129]: time="2024-07-02T08:58:38.441282692Z" level=error msg="StopPodSandbox for \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\" failed" error="failed to destroy network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:38.442628 kubelet[3640]: E0702 08:58:38.442348 3640 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:58:38.442628 kubelet[3640]: E0702 08:58:38.442457 3640 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd"} Jul 2 08:58:38.442628 kubelet[3640]: E0702 08:58:38.442524 3640 kuberuntime_manager.go:1080] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3809dd66-ffcd-4834-a237-f7d845ce4984\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:58:38.442628 kubelet[3640]: E0702 08:58:38.442576 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3809dd66-ffcd-4834-a237-f7d845ce4984\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-d9fmw" podUID="3809dd66-ffcd-4834-a237-f7d845ce4984" Jul 2 08:58:38.444262 containerd[2129]: time="2024-07-02T08:58:38.443468540Z" level=error msg="StopPodSandbox for \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\" failed" error="failed to destroy network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:38.444403 kubelet[3640]: E0702 08:58:38.444019 3640 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:58:38.444403 kubelet[3640]: E0702 08:58:38.444083 3640 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3"} Jul 2 08:58:38.444403 kubelet[3640]: E0702 08:58:38.444153 3640 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a48d9a9-af65-44c5-a924-37f85c1c6d43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:58:38.444403 kubelet[3640]: E0702 08:58:38.444209 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a48d9a9-af65-44c5-a924-37f85c1c6d43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-h8wml" podUID="0a48d9a9-af65-44c5-a924-37f85c1c6d43" Jul 2 08:58:38.452434 containerd[2129]: time="2024-07-02T08:58:38.452354265Z" level=error msg="StopPodSandbox for \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\" failed" error="failed to destroy network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 2 08:58:38.453228 kubelet[3640]: E0702 08:58:38.452953 3640 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:58:38.453228 kubelet[3640]: E0702 08:58:38.453025 3640 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9"} Jul 2 08:58:38.453228 kubelet[3640]: E0702 08:58:38.453090 3640 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7247e25-4e0b-429d-8736-192b57c4aae4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:58:38.453228 kubelet[3640]: E0702 08:58:38.453177 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7247e25-4e0b-429d-8736-192b57c4aae4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sxv4x" 
podUID="d7247e25-4e0b-429d-8736-192b57c4aae4" Jul 2 08:58:38.454247 containerd[2129]: time="2024-07-02T08:58:38.454070631Z" level=error msg="StopPodSandbox for \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\" failed" error="failed to destroy network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:58:38.454764 kubelet[3640]: E0702 08:58:38.454528 3640 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:58:38.454764 kubelet[3640]: E0702 08:58:38.454599 3640 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d"} Jul 2 08:58:38.454764 kubelet[3640]: E0702 08:58:38.454662 3640 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e42c34c-834d-42ba-9014-171b25b9d834\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:58:38.454764 kubelet[3640]: E0702 08:58:38.454723 3640 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"7e42c34c-834d-42ba-9014-171b25b9d834\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9584999-qbdw8" podUID="7e42c34c-834d-42ba-9014-171b25b9d834" Jul 2 08:58:43.450185 kubelet[3640]: I0702 08:58:43.450056 3640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:58:45.336005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826360967.mount: Deactivated successfully. Jul 2 08:58:45.395244 containerd[2129]: time="2024-07-02T08:58:45.395166044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:45.396876 containerd[2129]: time="2024-07-02T08:58:45.396770430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 08:58:45.398270 containerd[2129]: time="2024-07-02T08:58:45.398157747Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:45.404725 containerd[2129]: time="2024-07-02T08:58:45.404594993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:45.406604 containerd[2129]: time="2024-07-02T08:58:45.406345395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 7.123231753s" Jul 2 08:58:45.406604 containerd[2129]: time="2024-07-02T08:58:45.406425055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 08:58:45.440147 containerd[2129]: time="2024-07-02T08:58:45.439818976Z" level=info msg="CreateContainer within sandbox \"3bde90ce9d153dae859e72c02e1e0f49ada5ae05009d5b2ba3ac4c8ba68add5d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 08:58:45.462968 containerd[2129]: time="2024-07-02T08:58:45.462703421Z" level=info msg="CreateContainer within sandbox \"3bde90ce9d153dae859e72c02e1e0f49ada5ae05009d5b2ba3ac4c8ba68add5d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d04db0485208563fda1154ecbca179bda0b59342e2e88deb012ddc50d0feae0f\"" Jul 2 08:58:45.467918 containerd[2129]: time="2024-07-02T08:58:45.467503924Z" level=info msg="StartContainer for \"d04db0485208563fda1154ecbca179bda0b59342e2e88deb012ddc50d0feae0f\"" Jul 2 08:58:45.589265 containerd[2129]: time="2024-07-02T08:58:45.588986467Z" level=info msg="StartContainer for \"d04db0485208563fda1154ecbca179bda0b59342e2e88deb012ddc50d0feae0f\" returns successfully" Jul 2 08:58:45.724845 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 08:58:45.724971 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 08:58:48.259710 (udev-worker)[4641]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:48.268975 systemd-networkd[1689]: vxlan.calico: Link UP Jul 2 08:58:48.268989 systemd-networkd[1689]: vxlan.calico: Gained carrier Jul 2 08:58:48.328530 (udev-worker)[4642]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:58:48.866049 kubelet[3640]: I0702 08:58:48.865814 3640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:58:49.731816 systemd-networkd[1689]: vxlan.calico: Gained IPv6LL Jul 2 08:58:51.088407 containerd[2129]: time="2024-07-02T08:58:51.088345815Z" level=info msg="StopPodSandbox for \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\"" Jul 2 08:58:51.089979 containerd[2129]: time="2024-07-02T08:58:51.089091700Z" level=info msg="StopPodSandbox for \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\"" Jul 2 08:58:51.276304 kubelet[3640]: I0702 08:58:51.275647 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dtgzz" podStartSLOduration=6.751168387 podCreationTimestamp="2024-07-02 08:58:25 +0000 UTC" firstStartedPulling="2024-07-02 08:58:25.883248333 +0000 UTC m=+22.011850499" lastFinishedPulling="2024-07-02 08:58:45.407102278 +0000 UTC m=+41.535704444" observedRunningTime="2024-07-02 08:58:46.400180363 +0000 UTC m=+42.528782565" watchObservedRunningTime="2024-07-02 08:58:51.275022332 +0000 UTC m=+47.403624606" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.282 [INFO][4932] k8s.go 608: Cleaning up netns ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.283 [INFO][4932] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" iface="eth0" netns="/var/run/netns/cni-f022aed9-08fb-43fb-4d99-61f4a6acdedf" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.284 [INFO][4932] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" iface="eth0" netns="/var/run/netns/cni-f022aed9-08fb-43fb-4d99-61f4a6acdedf" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.285 [INFO][4932] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" iface="eth0" netns="/var/run/netns/cni-f022aed9-08fb-43fb-4d99-61f4a6acdedf" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.285 [INFO][4932] k8s.go 615: Releasing IP address(es) ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.285 [INFO][4932] utils.go 188: Calico CNI releasing IP address ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.348 [INFO][4946] ipam_plugin.go 411: Releasing address using handleID ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.348 [INFO][4946] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.349 [INFO][4946] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.365 [WARNING][4946] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.365 [INFO][4946] ipam_plugin.go 439: Releasing address using workloadID ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.368 [INFO][4946] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:58:51.375118 containerd[2129]: 2024-07-02 08:58:51.370 [INFO][4932] k8s.go 621: Teardown processing complete. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:58:51.375118 containerd[2129]: time="2024-07-02T08:58:51.374058734Z" level=info msg="TearDown network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\" successfully" Jul 2 08:58:51.375118 containerd[2129]: time="2024-07-02T08:58:51.374109688Z" level=info msg="StopPodSandbox for \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\" returns successfully" Jul 2 08:58:51.381467 systemd[1]: run-netns-cni\x2df022aed9\x2d08fb\x2d43fb\x2d4d99\x2d61f4a6acdedf.mount: Deactivated successfully. 
Jul 2 08:58:51.386651 containerd[2129]: time="2024-07-02T08:58:51.385929811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sxv4x,Uid:d7247e25-4e0b-429d-8736-192b57c4aae4,Namespace:calico-system,Attempt:1,}" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.270 [INFO][4933] k8s.go 608: Cleaning up netns ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.270 [INFO][4933] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" iface="eth0" netns="/var/run/netns/cni-0b470bc5-77bf-4af1-69c8-867364f73cc2" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.271 [INFO][4933] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" iface="eth0" netns="/var/run/netns/cni-0b470bc5-77bf-4af1-69c8-867364f73cc2" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.278 [INFO][4933] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" iface="eth0" netns="/var/run/netns/cni-0b470bc5-77bf-4af1-69c8-867364f73cc2" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.280 [INFO][4933] k8s.go 615: Releasing IP address(es) ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.281 [INFO][4933] utils.go 188: Calico CNI releasing IP address ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.362 [INFO][4945] ipam_plugin.go 411: Releasing address using handleID ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.363 [INFO][4945] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.368 [INFO][4945] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.390 [WARNING][4945] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.390 [INFO][4945] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.394 [INFO][4945] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:58:51.404561 containerd[2129]: 2024-07-02 08:58:51.399 [INFO][4933] k8s.go 621: Teardown processing complete. ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:58:51.405434 containerd[2129]: time="2024-07-02T08:58:51.404945142Z" level=info msg="TearDown network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\" successfully" Jul 2 08:58:51.405434 containerd[2129]: time="2024-07-02T08:58:51.404985662Z" level=info msg="StopPodSandbox for \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\" returns successfully" Jul 2 08:58:51.408135 containerd[2129]: time="2024-07-02T08:58:51.408066834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h8wml,Uid:0a48d9a9-af65-44c5-a924-37f85c1c6d43,Namespace:kube-system,Attempt:1,}" Jul 2 08:58:51.424745 systemd[1]: run-netns-cni\x2d0b470bc5\x2d77bf\x2d4af1\x2d69c8\x2d867364f73cc2.mount: Deactivated successfully. Jul 2 08:58:51.702181 (udev-worker)[4995]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:58:51.703590 systemd-networkd[1689]: cali9b5700ce6b7: Link UP Jul 2 08:58:51.706545 systemd-networkd[1689]: cali9b5700ce6b7: Gained carrier Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.514 [INFO][4958] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0 csi-node-driver- calico-system d7247e25-4e0b-429d-8736-192b57c4aae4 733 0 2024-07-02 08:58:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-24-171 csi-node-driver-sxv4x eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9b5700ce6b7 [] []}} ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.514 [INFO][4958] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.600 [INFO][4981] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" HandleID="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.625 [INFO][4981] ipam_plugin.go 264: Auto assigning IP 
ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" HandleID="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003162d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-171", "pod":"csi-node-driver-sxv4x", "timestamp":"2024-07-02 08:58:51.600746147 +0000 UTC"}, Hostname:"ip-172-31-24-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.625 [INFO][4981] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.625 [INFO][4981] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.626 [INFO][4981] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-171' Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.630 [INFO][4981] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.636 [INFO][4981] ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.651 [INFO][4981] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.655 [INFO][4981] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.659 [INFO][4981] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 
host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.659 [INFO][4981] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.662 [INFO][4981] ipam.go 1685: Creating new handle: k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.668 [INFO][4981] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.676 [INFO][4981] ipam.go 1216: Successfully claimed IPs: [192.168.6.1/26] block=192.168.6.0/26 handle="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.676 [INFO][4981] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.1/26] handle="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" host="ip-172-31-24-171" Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.677 [INFO][4981] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:58:51.762920 containerd[2129]: 2024-07-02 08:58:51.677 [INFO][4981] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.6.1/26] IPv6=[] ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" HandleID="k8s-pod-network.d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.764117 containerd[2129]: 2024-07-02 08:58:51.682 [INFO][4958] k8s.go 386: Populated endpoint ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7247e25-4e0b-429d-8736-192b57c4aae4", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"", Pod:"csi-node-driver-sxv4x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali9b5700ce6b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:51.764117 containerd[2129]: 2024-07-02 08:58:51.683 [INFO][4958] k8s.go 387: Calico CNI using IPs: [192.168.6.1/32] ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.764117 containerd[2129]: 2024-07-02 08:58:51.683 [INFO][4958] dataplane_linux.go 68: Setting the host side veth name to cali9b5700ce6b7 ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.764117 containerd[2129]: 2024-07-02 08:58:51.708 [INFO][4958] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.764117 containerd[2129]: 2024-07-02 08:58:51.711 [INFO][4958] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7247e25-4e0b-429d-8736-192b57c4aae4", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce", Pod:"csi-node-driver-sxv4x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9b5700ce6b7", MAC:"92:3a:a6:50:9c:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:51.764117 containerd[2129]: 2024-07-02 08:58:51.744 [INFO][4958] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce" Namespace="calico-system" Pod="csi-node-driver-sxv4x" WorkloadEndpoint="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:58:51.811625 (udev-worker)[4998]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:58:51.814189 systemd-networkd[1689]: cali2977353e735: Link UP Jul 2 08:58:51.815611 systemd-networkd[1689]: cali2977353e735: Gained carrier Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.539 [INFO][4968] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0 coredns-5dd5756b68- kube-system 0a48d9a9-af65-44c5-a924-37f85c1c6d43 732 0 2024-07-02 08:58:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-171 coredns-5dd5756b68-h8wml eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2977353e735 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.540 [INFO][4968] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.605 [INFO][4985] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" HandleID="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.628 [INFO][4985] ipam_plugin.go 264: Auto assigning IP ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" 
HandleID="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ef950), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-171", "pod":"coredns-5dd5756b68-h8wml", "timestamp":"2024-07-02 08:58:51.60498956 +0000 UTC"}, Hostname:"ip-172-31-24-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.628 [INFO][4985] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.677 [INFO][4985] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.678 [INFO][4985] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-171' Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.681 [INFO][4985] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.698 [INFO][4985] ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.725 [INFO][4985] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.733 [INFO][4985] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.752 [INFO][4985] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 
08:58:51.753 [INFO][4985] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.766 [INFO][4985] ipam.go 1685: Creating new handle: k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53 Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.778 [INFO][4985] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.794 [INFO][4985] ipam.go 1216: Successfully claimed IPs: [192.168.6.2/26] block=192.168.6.0/26 handle="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.794 [INFO][4985] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.2/26] handle="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" host="ip-172-31-24-171" Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.795 [INFO][4985] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:58:51.863519 containerd[2129]: 2024-07-02 08:58:51.796 [INFO][4985] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.6.2/26] IPv6=[] ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" HandleID="k8s-pod-network.05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.871966 containerd[2129]: 2024-07-02 08:58:51.803 [INFO][4968] k8s.go 386: Populated endpoint ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a48d9a9-af65-44c5-a924-37f85c1c6d43", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"", Pod:"coredns-5dd5756b68-h8wml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2977353e735", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:51.871966 containerd[2129]: 2024-07-02 08:58:51.803 [INFO][4968] k8s.go 387: Calico CNI using IPs: [192.168.6.2/32] ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.871966 containerd[2129]: 2024-07-02 08:58:51.803 [INFO][4968] dataplane_linux.go 68: Setting the host side veth name to cali2977353e735 ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.871966 containerd[2129]: 2024-07-02 08:58:51.814 [INFO][4968] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.871966 containerd[2129]: 2024-07-02 08:58:51.817 [INFO][4968] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a48d9a9-af65-44c5-a924-37f85c1c6d43", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53", Pod:"coredns-5dd5756b68-h8wml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2977353e735", MAC:"3e:6b:34:69:25:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:51.871966 containerd[2129]: 2024-07-02 08:58:51.841 [INFO][4968] k8s.go 500: Wrote updated endpoint to datastore ContainerID="05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53" Namespace="kube-system" 
Pod="coredns-5dd5756b68-h8wml" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:58:51.885079 containerd[2129]: time="2024-07-02T08:58:51.884835807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:51.885079 containerd[2129]: time="2024-07-02T08:58:51.884962770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:51.887041 containerd[2129]: time="2024-07-02T08:58:51.885016437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:51.887041 containerd[2129]: time="2024-07-02T08:58:51.885094332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:51.939164 containerd[2129]: time="2024-07-02T08:58:51.937905701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:51.939164 containerd[2129]: time="2024-07-02T08:58:51.938027826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:51.939164 containerd[2129]: time="2024-07-02T08:58:51.938076822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:51.939164 containerd[2129]: time="2024-07-02T08:58:51.938182907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:52.039283 containerd[2129]: time="2024-07-02T08:58:52.039224792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sxv4x,Uid:d7247e25-4e0b-429d-8736-192b57c4aae4,Namespace:calico-system,Attempt:1,} returns sandbox id \"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce\"" Jul 2 08:58:52.053507 containerd[2129]: time="2024-07-02T08:58:52.053250405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 08:58:52.082619 containerd[2129]: time="2024-07-02T08:58:52.082539835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h8wml,Uid:0a48d9a9-af65-44c5-a924-37f85c1c6d43,Namespace:kube-system,Attempt:1,} returns sandbox id \"05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53\"" Jul 2 08:58:52.093368 containerd[2129]: time="2024-07-02T08:58:52.093076601Z" level=info msg="StopPodSandbox for \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\"" Jul 2 08:58:52.103549 containerd[2129]: time="2024-07-02T08:58:52.103092762Z" level=info msg="CreateContainer within sandbox \"05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:58:52.129681 containerd[2129]: time="2024-07-02T08:58:52.129624486Z" level=info msg="CreateContainer within sandbox \"05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11dfd836f7ba9656ece57aa066856ac5060c370b34cccd9d4161d45585ffee58\"" Jul 2 08:58:52.135007 containerd[2129]: time="2024-07-02T08:58:52.133698734Z" level=info msg="StartContainer for \"11dfd836f7ba9656ece57aa066856ac5060c370b34cccd9d4161d45585ffee58\"" Jul 2 08:58:52.275898 containerd[2129]: time="2024-07-02T08:58:52.275431534Z" level=info msg="StartContainer for \"11dfd836f7ba9656ece57aa066856ac5060c370b34cccd9d4161d45585ffee58\" returns 
successfully" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.226 [INFO][5117] k8s.go 608: Cleaning up netns ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.229 [INFO][5117] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" iface="eth0" netns="/var/run/netns/cni-a2290bc5-84d2-2bb9-2878-9a4ea9086d5f" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.229 [INFO][5117] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" iface="eth0" netns="/var/run/netns/cni-a2290bc5-84d2-2bb9-2878-9a4ea9086d5f" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.230 [INFO][5117] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" iface="eth0" netns="/var/run/netns/cni-a2290bc5-84d2-2bb9-2878-9a4ea9086d5f" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.231 [INFO][5117] k8s.go 615: Releasing IP address(es) ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.232 [INFO][5117] utils.go 188: Calico CNI releasing IP address ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.315 [INFO][5147] ipam_plugin.go 411: Releasing address using handleID ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.316 [INFO][5147] ipam_plugin.go 352: About to acquire host-wide IPAM 
lock. Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.316 [INFO][5147] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.329 [WARNING][5147] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.329 [INFO][5147] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.331 [INFO][5147] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:58:52.337320 containerd[2129]: 2024-07-02 08:58:52.334 [INFO][5117] k8s.go 621: Teardown processing complete. 
ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:58:52.338710 containerd[2129]: time="2024-07-02T08:58:52.337927723Z" level=info msg="TearDown network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\" successfully" Jul 2 08:58:52.338710 containerd[2129]: time="2024-07-02T08:58:52.337968663Z" level=info msg="StopPodSandbox for \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\" returns successfully" Jul 2 08:58:52.338957 containerd[2129]: time="2024-07-02T08:58:52.338898360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9584999-qbdw8,Uid:7e42c34c-834d-42ba-9014-171b25b9d834,Namespace:calico-system,Attempt:1,}" Jul 2 08:58:52.398595 systemd[1]: run-netns-cni\x2da2290bc5\x2d84d2\x2d2bb9\x2d2878\x2d9a4ea9086d5f.mount: Deactivated successfully. Jul 2 08:58:52.476555 kubelet[3640]: I0702 08:58:52.475256 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h8wml" podStartSLOduration=36.475174012 podCreationTimestamp="2024-07-02 08:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:52.472240106 +0000 UTC m=+48.600842356" watchObservedRunningTime="2024-07-02 08:58:52.475174012 +0000 UTC m=+48.603776203" Jul 2 08:58:52.758526 systemd-networkd[1689]: calic263cd8abb0: Link UP Jul 2 08:58:52.761233 systemd-networkd[1689]: calic263cd8abb0: Gained carrier Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.469 [INFO][5168] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0 calico-kube-controllers-5f9584999- calico-system 7e42c34c-834d-42ba-9014-171b25b9d834 747 0 2024-07-02 08:58:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:5f9584999 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-171 calico-kube-controllers-5f9584999-qbdw8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic263cd8abb0 [] []}} ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.471 [INFO][5168] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.617 [INFO][5180] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" HandleID="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.673 [INFO][5180] ipam_plugin.go 264: Auto assigning IP ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" HandleID="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000ce780), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-171", "pod":"calico-kube-controllers-5f9584999-qbdw8", 
"timestamp":"2024-07-02 08:58:52.615700642 +0000 UTC"}, Hostname:"ip-172-31-24-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.673 [INFO][5180] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.674 [INFO][5180] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.674 [INFO][5180] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-171' Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.688 [INFO][5180] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.710 [INFO][5180] ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.721 [INFO][5180] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.724 [INFO][5180] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.728 [INFO][5180] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.729 [INFO][5180] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.731 [INFO][5180] ipam.go 1685: Creating new handle: 
k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.737 [INFO][5180] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.745 [INFO][5180] ipam.go 1216: Successfully claimed IPs: [192.168.6.3/26] block=192.168.6.0/26 handle="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.745 [INFO][5180] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.3/26] handle="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" host="ip-172-31-24-171" Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.745 [INFO][5180] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:58:52.809323 containerd[2129]: 2024-07-02 08:58:52.745 [INFO][5180] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.6.3/26] IPv6=[] ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" HandleID="k8s-pod-network.ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.814374 containerd[2129]: 2024-07-02 08:58:52.750 [INFO][5168] k8s.go 386: Populated endpoint ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0", 
GenerateName:"calico-kube-controllers-5f9584999-", Namespace:"calico-system", SelfLink:"", UID:"7e42c34c-834d-42ba-9014-171b25b9d834", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9584999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"", Pod:"calico-kube-controllers-5f9584999-qbdw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic263cd8abb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:52.814374 containerd[2129]: 2024-07-02 08:58:52.751 [INFO][5168] k8s.go 387: Calico CNI using IPs: [192.168.6.3/32] ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.814374 containerd[2129]: 2024-07-02 08:58:52.751 [INFO][5168] dataplane_linux.go 68: Setting the host side veth name to calic263cd8abb0 ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" 
WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.814374 containerd[2129]: 2024-07-02 08:58:52.761 [INFO][5168] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.814374 containerd[2129]: 2024-07-02 08:58:52.762 [INFO][5168] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0", GenerateName:"calico-kube-controllers-5f9584999-", Namespace:"calico-system", SelfLink:"", UID:"7e42c34c-834d-42ba-9014-171b25b9d834", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9584999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", 
ContainerID:"ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d", Pod:"calico-kube-controllers-5f9584999-qbdw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic263cd8abb0", MAC:"3e:85:53:b1:9f:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:52.814374 containerd[2129]: 2024-07-02 08:58:52.782 [INFO][5168] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d" Namespace="calico-system" Pod="calico-kube-controllers-5f9584999-qbdw8" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:58:52.852871 containerd[2129]: time="2024-07-02T08:58:52.852570619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:52.852871 containerd[2129]: time="2024-07-02T08:58:52.852657843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:52.852871 containerd[2129]: time="2024-07-02T08:58:52.852688902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:52.852871 containerd[2129]: time="2024-07-02T08:58:52.852718089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:52.959254 containerd[2129]: time="2024-07-02T08:58:52.959182675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9584999-qbdw8,Uid:7e42c34c-834d-42ba-9014-171b25b9d834,Namespace:calico-system,Attempt:1,} returns sandbox id \"ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d\"" Jul 2 08:58:53.011246 systemd[1]: Started sshd@7-172.31.24.171:22-147.75.109.163:46392.service - OpenSSH per-connection server daemon (147.75.109.163:46392). Jul 2 08:58:53.122950 systemd-networkd[1689]: cali2977353e735: Gained IPv6LL Jul 2 08:58:53.197961 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 46392 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:53.201125 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:53.210514 systemd-logind[2093]: New session 8 of user core. Jul 2 08:58:53.217288 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 08:58:53.315201 systemd-networkd[1689]: cali9b5700ce6b7: Gained IPv6LL Jul 2 08:58:53.598426 sshd[5243]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:53.616182 systemd[1]: sshd@7-172.31.24.171:22-147.75.109.163:46392.service: Deactivated successfully. Jul 2 08:58:53.630666 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:58:53.637494 systemd-logind[2093]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:58:53.648509 systemd-logind[2093]: Removed session 8. 
Jul 2 08:58:53.697326 containerd[2129]: time="2024-07-02T08:58:53.697037944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:53.698564 containerd[2129]: time="2024-07-02T08:58:53.698509519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 08:58:53.699538 containerd[2129]: time="2024-07-02T08:58:53.699443610Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:53.703181 containerd[2129]: time="2024-07-02T08:58:53.703096459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:53.705528 containerd[2129]: time="2024-07-02T08:58:53.705311470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.651811868s" Jul 2 08:58:53.705528 containerd[2129]: time="2024-07-02T08:58:53.705374718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 08:58:53.707242 containerd[2129]: time="2024-07-02T08:58:53.707152458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 08:58:53.710436 containerd[2129]: time="2024-07-02T08:58:53.710059087Z" level=info msg="CreateContainer within sandbox \"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 08:58:53.735384 containerd[2129]: time="2024-07-02T08:58:53.735299325Z" level=info msg="CreateContainer within sandbox \"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90eca6de78c04e33010fe54319278e2aeb9afc196426fe3fae9cb53f2ab5fabe\"" Jul 2 08:58:53.738267 containerd[2129]: time="2024-07-02T08:58:53.737959458Z" level=info msg="StartContainer for \"90eca6de78c04e33010fe54319278e2aeb9afc196426fe3fae9cb53f2ab5fabe\"" Jul 2 08:58:53.827095 systemd-networkd[1689]: calic263cd8abb0: Gained IPv6LL Jul 2 08:58:53.860177 containerd[2129]: time="2024-07-02T08:58:53.859927165Z" level=info msg="StartContainer for \"90eca6de78c04e33010fe54319278e2aeb9afc196426fe3fae9cb53f2ab5fabe\" returns successfully" Jul 2 08:58:54.088525 containerd[2129]: time="2024-07-02T08:58:54.088458579Z" level=info msg="StopPodSandbox for \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\"" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.191 [INFO][5316] k8s.go 608: Cleaning up netns ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.191 [INFO][5316] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" iface="eth0" netns="/var/run/netns/cni-a9339f4f-20f0-7636-c5b4-29950725adcd" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.192 [INFO][5316] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" iface="eth0" netns="/var/run/netns/cni-a9339f4f-20f0-7636-c5b4-29950725adcd" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.195 [INFO][5316] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" iface="eth0" netns="/var/run/netns/cni-a9339f4f-20f0-7636-c5b4-29950725adcd" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.195 [INFO][5316] k8s.go 615: Releasing IP address(es) ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.195 [INFO][5316] utils.go 188: Calico CNI releasing IP address ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.238 [INFO][5322] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.238 [INFO][5322] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.238 [INFO][5322] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.252 [WARNING][5322] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.252 [INFO][5322] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.256 [INFO][5322] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:58:54.261751 containerd[2129]: 2024-07-02 08:58:54.258 [INFO][5316] k8s.go 621: Teardown processing complete. ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:58:54.262611 containerd[2129]: time="2024-07-02T08:58:54.262293742Z" level=info msg="TearDown network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\" successfully" Jul 2 08:58:54.262611 containerd[2129]: time="2024-07-02T08:58:54.262358094Z" level=info msg="StopPodSandbox for \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\" returns successfully" Jul 2 08:58:54.268102 containerd[2129]: time="2024-07-02T08:58:54.265927850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d9fmw,Uid:3809dd66-ffcd-4834-a237-f7d845ce4984,Namespace:kube-system,Attempt:1,}" Jul 2 08:58:54.272395 systemd[1]: run-netns-cni\x2da9339f4f\x2d20f0\x2d7636\x2dc5b4\x2d29950725adcd.mount: Deactivated successfully. 
Jul 2 08:58:54.522979 systemd-networkd[1689]: cali6afffad867b: Link UP Jul 2 08:58:54.532570 systemd-networkd[1689]: cali6afffad867b: Gained carrier Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.357 [INFO][5328] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0 coredns-5dd5756b68- kube-system 3809dd66-ffcd-4834-a237-f7d845ce4984 795 0 2024-07-02 08:58:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-171 coredns-5dd5756b68-d9fmw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6afffad867b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.358 [INFO][5328] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.419 [INFO][5339] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" HandleID="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.445 [INFO][5339] ipam_plugin.go 264: Auto assigning IP ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" 
HandleID="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b42f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-171", "pod":"coredns-5dd5756b68-d9fmw", "timestamp":"2024-07-02 08:58:54.419670048 +0000 UTC"}, Hostname:"ip-172-31-24-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.446 [INFO][5339] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.446 [INFO][5339] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.446 [INFO][5339] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-171' Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.454 [INFO][5339] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.466 [INFO][5339] ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.474 [INFO][5339] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.479 [INFO][5339] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.483 [INFO][5339] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 
08:58:54.483 [INFO][5339] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.487 [INFO][5339] ipam.go 1685: Creating new handle: k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.495 [INFO][5339] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.505 [INFO][5339] ipam.go 1216: Successfully claimed IPs: [192.168.6.4/26] block=192.168.6.0/26 handle="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.505 [INFO][5339] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.4/26] handle="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" host="ip-172-31-24-171" Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.505 [INFO][5339] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:58:54.570415 containerd[2129]: 2024-07-02 08:58:54.505 [INFO][5339] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.6.4/26] IPv6=[] ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" HandleID="k8s-pod-network.981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.571831 containerd[2129]: 2024-07-02 08:58:54.510 [INFO][5328] k8s.go 386: Populated endpoint ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3809dd66-ffcd-4834-a237-f7d845ce4984", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"", Pod:"coredns-5dd5756b68-d9fmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6afffad867b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:54.571831 containerd[2129]: 2024-07-02 08:58:54.510 [INFO][5328] k8s.go 387: Calico CNI using IPs: [192.168.6.4/32] ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.571831 containerd[2129]: 2024-07-02 08:58:54.510 [INFO][5328] dataplane_linux.go 68: Setting the host side veth name to cali6afffad867b ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.571831 containerd[2129]: 2024-07-02 08:58:54.532 [INFO][5328] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.571831 containerd[2129]: 2024-07-02 08:58:54.533 [INFO][5328] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3809dd66-ffcd-4834-a237-f7d845ce4984", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba", Pod:"coredns-5dd5756b68-d9fmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6afffad867b", MAC:"96:78:b5:51:b4:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:58:54.571831 containerd[2129]: 2024-07-02 08:58:54.560 [INFO][5328] k8s.go 500: Wrote updated endpoint to datastore ContainerID="981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba" Namespace="kube-system" 
Pod="coredns-5dd5756b68-d9fmw" WorkloadEndpoint="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:58:54.685812 containerd[2129]: time="2024-07-02T08:58:54.684683266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:54.685812 containerd[2129]: time="2024-07-02T08:58:54.685054048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:54.685812 containerd[2129]: time="2024-07-02T08:58:54.685331818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:54.685812 containerd[2129]: time="2024-07-02T08:58:54.685371991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:54.805190 containerd[2129]: time="2024-07-02T08:58:54.804972558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-d9fmw,Uid:3809dd66-ffcd-4834-a237-f7d845ce4984,Namespace:kube-system,Attempt:1,} returns sandbox id \"981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba\"" Jul 2 08:58:54.813756 containerd[2129]: time="2024-07-02T08:58:54.813379999Z" level=info msg="CreateContainer within sandbox \"981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:58:54.837444 containerd[2129]: time="2024-07-02T08:58:54.837319964Z" level=info msg="CreateContainer within sandbox \"981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32318c894e953fbb8d5da33e2c1cc594cc404f12e082c35a2765a13b4e53195a\"" Jul 2 08:58:54.845091 containerd[2129]: time="2024-07-02T08:58:54.842899716Z" level=info msg="StartContainer for 
\"32318c894e953fbb8d5da33e2c1cc594cc404f12e082c35a2765a13b4e53195a\"" Jul 2 08:58:55.011951 containerd[2129]: time="2024-07-02T08:58:55.011878464Z" level=info msg="StartContainer for \"32318c894e953fbb8d5da33e2c1cc594cc404f12e082c35a2765a13b4e53195a\" returns successfully" Jul 2 08:58:55.553661 kubelet[3640]: I0702 08:58:55.551143 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-d9fmw" podStartSLOduration=39.54897922 podCreationTimestamp="2024-07-02 08:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:55.513237802 +0000 UTC m=+51.641840064" watchObservedRunningTime="2024-07-02 08:58:55.54897922 +0000 UTC m=+51.677581494" Jul 2 08:58:56.131338 systemd-networkd[1689]: cali6afffad867b: Gained IPv6LL Jul 2 08:58:57.142767 containerd[2129]: time="2024-07-02T08:58:57.142705845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:57.146669 containerd[2129]: time="2024-07-02T08:58:57.146618840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 08:58:57.147713 containerd[2129]: time="2024-07-02T08:58:57.147671538Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:57.154847 containerd[2129]: time="2024-07-02T08:58:57.154135749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:57.157447 containerd[2129]: time="2024-07-02T08:58:57.157356875Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 3.450133833s" Jul 2 08:58:57.157447 containerd[2129]: time="2024-07-02T08:58:57.157434685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 08:58:57.159716 containerd[2129]: time="2024-07-02T08:58:57.159620498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 08:58:57.257805 containerd[2129]: time="2024-07-02T08:58:57.256530565Z" level=info msg="CreateContainer within sandbox \"ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 08:58:57.295768 containerd[2129]: time="2024-07-02T08:58:57.295690668Z" level=info msg="CreateContainer within sandbox \"ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2685794ef849c2f84d862188dc48339be42058f2afe9a4ad0c4c771f03d130f5\"" Jul 2 08:58:57.299617 containerd[2129]: time="2024-07-02T08:58:57.298154515Z" level=info msg="StartContainer for \"2685794ef849c2f84d862188dc48339be42058f2afe9a4ad0c4c771f03d130f5\"" Jul 2 08:58:57.567094 containerd[2129]: time="2024-07-02T08:58:57.567022752Z" level=info msg="StartContainer for \"2685794ef849c2f84d862188dc48339be42058f2afe9a4ad0c4c771f03d130f5\" returns successfully" Jul 2 08:58:58.197831 ntpd[2080]: Listen normally on 6 vxlan.calico 192.168.6.0:123 Jul 2 08:58:58.198677 ntpd[2080]: 2 Jul 08:58:58 ntpd[2080]: Listen normally on 6 vxlan.calico 192.168.6.0:123 Jul 2 
08:58:58.198677 ntpd[2080]: 2 Jul 08:58:58 ntpd[2080]: Listen normally on 7 vxlan.calico [fe80::64aa:1fff:fe51:dcd8%4]:123 Jul 2 08:58:58.198677 ntpd[2080]: 2 Jul 08:58:58 ntpd[2080]: Listen normally on 8 cali9b5700ce6b7 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 08:58:58.198677 ntpd[2080]: 2 Jul 08:58:58 ntpd[2080]: Listen normally on 9 cali2977353e735 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 08:58:58.198677 ntpd[2080]: 2 Jul 08:58:58 ntpd[2080]: Listen normally on 10 calic263cd8abb0 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 08:58:58.198677 ntpd[2080]: 2 Jul 08:58:58 ntpd[2080]: Listen normally on 11 cali6afffad867b [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 08:58:58.198001 ntpd[2080]: Listen normally on 7 vxlan.calico [fe80::64aa:1fff:fe51:dcd8%4]:123 Jul 2 08:58:58.198086 ntpd[2080]: Listen normally on 8 cali9b5700ce6b7 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 08:58:58.198153 ntpd[2080]: Listen normally on 9 cali2977353e735 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 08:58:58.198221 ntpd[2080]: Listen normally on 10 calic263cd8abb0 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 08:58:58.198293 ntpd[2080]: Listen normally on 11 cali6afffad867b [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 08:58:58.636190 systemd[1]: Started sshd@8-172.31.24.171:22-147.75.109.163:46396.service - OpenSSH per-connection server daemon (147.75.109.163:46396). Jul 2 08:58:58.854494 sshd[5491]: Accepted publickey for core from 147.75.109.163 port 46396 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:58.862206 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:58.889920 systemd-logind[2093]: New session 9 of user core. Jul 2 08:58:58.904098 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:58:59.345368 sshd[5491]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:59.367254 systemd[1]: sshd@8-172.31.24.171:22-147.75.109.163:46396.service: Deactivated successfully. 
Jul 2 08:58:59.382184 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:58:59.385036 systemd-logind[2093]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:58:59.392478 systemd-logind[2093]: Removed session 9. Jul 2 08:58:59.492655 containerd[2129]: time="2024-07-02T08:58:59.491175269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:59.494979 containerd[2129]: time="2024-07-02T08:58:59.494917551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 08:58:59.500000 containerd[2129]: time="2024-07-02T08:58:59.498429438Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:59.507928 containerd[2129]: time="2024-07-02T08:58:59.507857906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:59.509665 containerd[2129]: time="2024-07-02T08:58:59.509567380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 2.34984955s" Jul 2 08:58:59.509665 containerd[2129]: time="2024-07-02T08:58:59.509640365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 
08:58:59.519749 containerd[2129]: time="2024-07-02T08:58:59.519690744Z" level=info msg="CreateContainer within sandbox \"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 08:58:59.557963 containerd[2129]: time="2024-07-02T08:58:59.557881194Z" level=info msg="CreateContainer within sandbox \"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fa0db7c92e11879587715a90d056c87d121bfabb0fd51186b69f7864811e90c8\"" Jul 2 08:58:59.561270 containerd[2129]: time="2024-07-02T08:58:59.561196903Z" level=info msg="StartContainer for \"fa0db7c92e11879587715a90d056c87d121bfabb0fd51186b69f7864811e90c8\"" Jul 2 08:58:59.657444 systemd-journald[1602]: Under memory pressure, flushing caches. Jul 2 08:58:59.652462 systemd-resolved[2024]: Under memory pressure, flushing caches. Jul 2 08:58:59.652573 systemd-resolved[2024]: Flushed all caches. 
Jul 2 08:58:59.895189 kubelet[3640]: I0702 08:58:59.895126 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f9584999-qbdw8" podStartSLOduration=30.698532836 podCreationTimestamp="2024-07-02 08:58:25 +0000 UTC" firstStartedPulling="2024-07-02 08:58:52.961604321 +0000 UTC m=+49.090206475" lastFinishedPulling="2024-07-02 08:58:57.158135884 +0000 UTC m=+53.286738038" observedRunningTime="2024-07-02 08:58:58.656186105 +0000 UTC m=+54.784788319" watchObservedRunningTime="2024-07-02 08:58:59.895064399 +0000 UTC m=+56.023666577" Jul 2 08:58:59.933926 containerd[2129]: time="2024-07-02T08:58:59.931359954Z" level=info msg="StartContainer for \"fa0db7c92e11879587715a90d056c87d121bfabb0fd51186b69f7864811e90c8\" returns successfully" Jul 2 08:59:00.377626 kubelet[3640]: I0702 08:59:00.377296 3640 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 08:59:00.377626 kubelet[3640]: I0702 08:59:00.377356 3640 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 08:59:00.671053 kubelet[3640]: I0702 08:59:00.670820 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-sxv4x" podStartSLOduration=28.207728175 podCreationTimestamp="2024-07-02 08:58:25 +0000 UTC" firstStartedPulling="2024-07-02 08:58:52.048968933 +0000 UTC m=+48.177571099" lastFinishedPulling="2024-07-02 08:58:59.511966431 +0000 UTC m=+55.640568609" observedRunningTime="2024-07-02 08:59:00.669909566 +0000 UTC m=+56.798511768" watchObservedRunningTime="2024-07-02 08:59:00.670725685 +0000 UTC m=+56.799327864" Jul 2 08:59:04.104884 containerd[2129]: time="2024-07-02T08:59:04.104717763Z" level=info msg="StopPodSandbox for 
\"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\"" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.182 [WARNING][5580] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7247e25-4e0b-429d-8736-192b57c4aae4", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce", Pod:"csi-node-driver-sxv4x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9b5700ce6b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.182 [INFO][5580] k8s.go 608: Cleaning up netns 
ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.182 [INFO][5580] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" iface="eth0" netns="" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.183 [INFO][5580] k8s.go 615: Releasing IP address(es) ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.183 [INFO][5580] utils.go 188: Calico CNI releasing IP address ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.223 [INFO][5587] ipam_plugin.go 411: Releasing address using handleID ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.224 [INFO][5587] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.224 [INFO][5587] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.237 [WARNING][5587] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.238 [INFO][5587] ipam_plugin.go 439: Releasing address using workloadID ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.240 [INFO][5587] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:04.246069 containerd[2129]: 2024-07-02 08:59:04.243 [INFO][5580] k8s.go 621: Teardown processing complete. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.248269 containerd[2129]: time="2024-07-02T08:59:04.246115848Z" level=info msg="TearDown network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\" successfully" Jul 2 08:59:04.248269 containerd[2129]: time="2024-07-02T08:59:04.246154207Z" level=info msg="StopPodSandbox for \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\" returns successfully" Jul 2 08:59:04.248269 containerd[2129]: time="2024-07-02T08:59:04.247053228Z" level=info msg="RemovePodSandbox for \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\"" Jul 2 08:59:04.248269 containerd[2129]: time="2024-07-02T08:59:04.247100976Z" level=info msg="Forcibly stopping sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\"" Jul 2 08:59:04.381738 systemd[1]: Started sshd@9-172.31.24.171:22-147.75.109.163:51398.service - OpenSSH per-connection server daemon (147.75.109.163:51398). 
Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.317 [WARNING][5605] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7247e25-4e0b-429d-8736-192b57c4aae4", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"d068371a52551df56492113f94d008bc8db09e5c60d8611e6cb5c77f820b45ce", Pod:"csi-node-driver-sxv4x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9b5700ce6b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.318 [INFO][5605] k8s.go 608: Cleaning up netns ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 
08:59:04.318 [INFO][5605] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" iface="eth0" netns="" Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.318 [INFO][5605] k8s.go 615: Releasing IP address(es) ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.318 [INFO][5605] utils.go 188: Calico CNI releasing IP address ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.363 [INFO][5611] ipam_plugin.go 411: Releasing address using handleID ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.363 [INFO][5611] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.363 [INFO][5611] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.380 [WARNING][5611] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.380 [INFO][5611] ipam_plugin.go 439: Releasing address using workloadID ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" HandleID="k8s-pod-network.805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Workload="ip--172--31--24--171-k8s-csi--node--driver--sxv4x-eth0" Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.392 [INFO][5611] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:04.406619 containerd[2129]: 2024-07-02 08:59:04.401 [INFO][5605] k8s.go 621: Teardown processing complete. ContainerID="805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9" Jul 2 08:59:04.407442 containerd[2129]: time="2024-07-02T08:59:04.406699005Z" level=info msg="TearDown network for sandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\" successfully" Jul 2 08:59:04.415591 containerd[2129]: time="2024-07-02T08:59:04.415294136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:59:04.415591 containerd[2129]: time="2024-07-02T08:59:04.415400881Z" level=info msg="RemovePodSandbox \"805f75d3d22394c8eb8254ec37ab3cb68d57e942613fbeaf45f5ff354ae7aad9\" returns successfully" Jul 2 08:59:04.417627 containerd[2129]: time="2024-07-02T08:59:04.417172511Z" level=info msg="StopPodSandbox for \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\"" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.490 [WARNING][5631] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0", GenerateName:"calico-kube-controllers-5f9584999-", Namespace:"calico-system", SelfLink:"", UID:"7e42c34c-834d-42ba-9014-171b25b9d834", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9584999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d", Pod:"calico-kube-controllers-5f9584999-qbdw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic263cd8abb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.490 [INFO][5631] k8s.go 608: Cleaning up netns ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.490 [INFO][5631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" iface="eth0" netns="" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.490 [INFO][5631] k8s.go 615: Releasing IP address(es) ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.490 [INFO][5631] utils.go 188: Calico CNI releasing IP address ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.536 [INFO][5637] ipam_plugin.go 411: Releasing address using handleID ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.538 [INFO][5637] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.538 [INFO][5637] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.551 [WARNING][5637] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.551 [INFO][5637] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.555 [INFO][5637] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:04.561429 containerd[2129]: 2024-07-02 08:59:04.558 [INFO][5631] k8s.go 621: Teardown processing complete. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.563596 containerd[2129]: time="2024-07-02T08:59:04.562358525Z" level=info msg="TearDown network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\" successfully" Jul 2 08:59:04.563596 containerd[2129]: time="2024-07-02T08:59:04.562403799Z" level=info msg="StopPodSandbox for \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\" returns successfully" Jul 2 08:59:04.563596 containerd[2129]: time="2024-07-02T08:59:04.563122694Z" level=info msg="RemovePodSandbox for \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\"" Jul 2 08:59:04.563596 containerd[2129]: time="2024-07-02T08:59:04.563168017Z" level=info msg="Forcibly stopping sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\"" Jul 2 08:59:04.588554 sshd[5617]: Accepted publickey for core from 147.75.109.163 port 51398 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:04.592650 sshd[5617]: pam_unix(sshd:session): session opened for user 
core(uid=500) by (uid=0) Jul 2 08:59:04.603720 systemd-logind[2093]: New session 10 of user core. Jul 2 08:59:04.611442 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.661 [WARNING][5656] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0", GenerateName:"calico-kube-controllers-5f9584999-", Namespace:"calico-system", SelfLink:"", UID:"7e42c34c-834d-42ba-9014-171b25b9d834", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9584999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"ead9371de909916a7ea3ccadbf0aea39773e1edb83d2eeff6a043f3cbb2c363d", Pod:"calico-kube-controllers-5f9584999-qbdw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic263cd8abb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 
2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.661 [INFO][5656] k8s.go 608: Cleaning up netns ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.662 [INFO][5656] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" iface="eth0" netns="" Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.662 [INFO][5656] k8s.go 615: Releasing IP address(es) ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.662 [INFO][5656] utils.go 188: Calico CNI releasing IP address ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.703 [INFO][5664] ipam_plugin.go 411: Releasing address using handleID ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.704 [INFO][5664] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.704 [INFO][5664] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.725 [WARNING][5664] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.726 [INFO][5664] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" HandleID="k8s-pod-network.c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Workload="ip--172--31--24--171-k8s-calico--kube--controllers--5f9584999--qbdw8-eth0" Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.737 [INFO][5664] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:04.760879 containerd[2129]: 2024-07-02 08:59:04.750 [INFO][5656] k8s.go 621: Teardown processing complete. ContainerID="c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d" Jul 2 08:59:04.760879 containerd[2129]: time="2024-07-02T08:59:04.760539539Z" level=info msg="TearDown network for sandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\" successfully" Jul 2 08:59:04.783412 containerd[2129]: time="2024-07-02T08:59:04.782518815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:59:04.783412 containerd[2129]: time="2024-07-02T08:59:04.782660066Z" level=info msg="RemovePodSandbox \"c7f2a3731f45f50089eb6ec34d91431951865ee225cb8fa417f0e08b3b354b5d\" returns successfully" Jul 2 08:59:04.784362 containerd[2129]: time="2024-07-02T08:59:04.783901090Z" level=info msg="StopPodSandbox for \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\"" Jul 2 08:59:04.959997 sshd[5617]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:04.970511 systemd[1]: sshd@9-172.31.24.171:22-147.75.109.163:51398.service: Deactivated successfully. Jul 2 08:59:04.981246 systemd-logind[2093]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:59:04.986733 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:59:04.999295 systemd[1]: Started sshd@10-172.31.24.171:22-147.75.109.163:51408.service - OpenSSH per-connection server daemon (147.75.109.163:51408). Jul 2 08:59:05.005838 systemd-logind[2093]: Removed session 10. Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.923 [WARNING][5694] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3809dd66-ffcd-4834-a237-f7d845ce4984", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba", Pod:"coredns-5dd5756b68-d9fmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6afffad867b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.924 [INFO][5694] k8s.go 608: Cleaning up netns 
ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.924 [INFO][5694] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" iface="eth0" netns="" Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.924 [INFO][5694] k8s.go 615: Releasing IP address(es) ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.924 [INFO][5694] utils.go 188: Calico CNI releasing IP address ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.993 [INFO][5701] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.993 [INFO][5701] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:04.993 [INFO][5701] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:05.011 [WARNING][5701] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:05.011 [INFO][5701] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:05.013 [INFO][5701] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:05.021298 containerd[2129]: 2024-07-02 08:59:05.016 [INFO][5694] k8s.go 621: Teardown processing complete. ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.022139 containerd[2129]: time="2024-07-02T08:59:05.021343369Z" level=info msg="TearDown network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\" successfully" Jul 2 08:59:05.022139 containerd[2129]: time="2024-07-02T08:59:05.021381885Z" level=info msg="StopPodSandbox for \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\" returns successfully" Jul 2 08:59:05.022248 containerd[2129]: time="2024-07-02T08:59:05.022145286Z" level=info msg="RemovePodSandbox for \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\"" Jul 2 08:59:05.022248 containerd[2129]: time="2024-07-02T08:59:05.022191533Z" level=info msg="Forcibly stopping sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\"" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.091 [WARNING][5723] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3809dd66-ffcd-4834-a237-f7d845ce4984", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"981601010a8a108168c92020fa75071fecbc107d1bdcd4e1fd009ebe617fcdba", Pod:"coredns-5dd5756b68-d9fmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6afffad867b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.091 [INFO][5723] k8s.go 608: Cleaning up netns 
ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.092 [INFO][5723] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" iface="eth0" netns="" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.092 [INFO][5723] k8s.go 615: Releasing IP address(es) ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.092 [INFO][5723] utils.go 188: Calico CNI releasing IP address ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.135 [INFO][5730] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.135 [INFO][5730] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.135 [INFO][5730] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.150 [WARNING][5730] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.150 [INFO][5730] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" HandleID="k8s-pod-network.4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--d9fmw-eth0" Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.153 [INFO][5730] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:05.158768 containerd[2129]: 2024-07-02 08:59:05.156 [INFO][5723] k8s.go 621: Teardown processing complete. ContainerID="4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd" Jul 2 08:59:05.158768 containerd[2129]: time="2024-07-02T08:59:05.158808265Z" level=info msg="TearDown network for sandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\" successfully" Jul 2 08:59:05.171084 containerd[2129]: time="2024-07-02T08:59:05.170999661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:59:05.171221 containerd[2129]: time="2024-07-02T08:59:05.171165392Z" level=info msg="RemovePodSandbox \"4b17d322cadba7fc43f046d2742995453fee0fd799477344a030bede96dc8cdd\" returns successfully" Jul 2 08:59:05.174031 containerd[2129]: time="2024-07-02T08:59:05.173679280Z" level=info msg="StopPodSandbox for \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\"" Jul 2 08:59:05.196728 sshd[5709]: Accepted publickey for core from 147.75.109.163 port 51408 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:05.206435 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:05.225655 systemd-logind[2093]: New session 11 of user core. Jul 2 08:59:05.233596 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.288 [WARNING][5748] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a48d9a9-af65-44c5-a924-37f85c1c6d43", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ip-172-31-24-171", ContainerID:"05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53", Pod:"coredns-5dd5756b68-h8wml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2977353e735", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.288 [INFO][5748] k8s.go 608: Cleaning up netns ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.288 [INFO][5748] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" iface="eth0" netns="" Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.288 [INFO][5748] k8s.go 615: Releasing IP address(es) ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.289 [INFO][5748] utils.go 188: Calico CNI releasing IP address ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.330 [INFO][5756] ipam_plugin.go 411: Releasing address using handleID ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.330 [INFO][5756] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.331 [INFO][5756] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.349 [WARNING][5756] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.349 [INFO][5756] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.354 [INFO][5756] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:05.366311 containerd[2129]: 2024-07-02 08:59:05.362 [INFO][5748] k8s.go 621: Teardown processing complete. ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.370202 containerd[2129]: time="2024-07-02T08:59:05.366513891Z" level=info msg="TearDown network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\" successfully" Jul 2 08:59:05.370202 containerd[2129]: time="2024-07-02T08:59:05.366555756Z" level=info msg="StopPodSandbox for \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\" returns successfully" Jul 2 08:59:05.370202 containerd[2129]: time="2024-07-02T08:59:05.367569098Z" level=info msg="RemovePodSandbox for \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\"" Jul 2 08:59:05.370202 containerd[2129]: time="2024-07-02T08:59:05.367624050Z" level=info msg="Forcibly stopping sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\"" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.452 [WARNING][5778] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0a48d9a9-af65-44c5-a924-37f85c1c6d43", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"05dbd97e3cb8ed1422d13875bec2f5ded5f9be0a6cd74ce8970293e1aa7b4e53", Pod:"coredns-5dd5756b68-h8wml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2977353e735", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.452 [INFO][5778] k8s.go 608: Cleaning up netns 
ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.452 [INFO][5778] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" iface="eth0" netns="" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.452 [INFO][5778] k8s.go 615: Releasing IP address(es) ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.452 [INFO][5778] utils.go 188: Calico CNI releasing IP address ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.508 [INFO][5785] ipam_plugin.go 411: Releasing address using handleID ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.508 [INFO][5785] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.508 [INFO][5785] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.530 [WARNING][5785] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.531 [INFO][5785] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" HandleID="k8s-pod-network.b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Workload="ip--172--31--24--171-k8s-coredns--5dd5756b68--h8wml-eth0" Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.534 [INFO][5785] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:05.549823 containerd[2129]: 2024-07-02 08:59:05.543 [INFO][5778] k8s.go 621: Teardown processing complete. ContainerID="b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3" Jul 2 08:59:05.551909 containerd[2129]: time="2024-07-02T08:59:05.549748624Z" level=info msg="TearDown network for sandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\" successfully" Jul 2 08:59:05.557396 containerd[2129]: time="2024-07-02T08:59:05.556998843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:59:05.557396 containerd[2129]: time="2024-07-02T08:59:05.557173578Z" level=info msg="RemovePodSandbox \"b730bd59a5e5a81acac2fadf2f051959445d5663bd27860c6e1078ec9b7cdaf3\" returns successfully" Jul 2 08:59:05.917799 sshd[5709]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:05.933028 systemd[1]: sshd@10-172.31.24.171:22-147.75.109.163:51408.service: Deactivated successfully. Jul 2 08:59:05.956157 systemd-logind[2093]: Session 11 logged out. Waiting for processes to exit. 
Jul 2 08:59:05.967498 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:59:05.991676 systemd[1]: Started sshd@11-172.31.24.171:22-147.75.109.163:51422.service - OpenSSH per-connection server daemon (147.75.109.163:51422). Jul 2 08:59:05.993121 systemd-logind[2093]: Removed session 11. Jul 2 08:59:06.162863 sshd[5794]: Accepted publickey for core from 147.75.109.163 port 51422 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:06.165531 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:06.174759 systemd-logind[2093]: New session 12 of user core. Jul 2 08:59:06.184489 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 08:59:06.442112 sshd[5794]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:06.449187 systemd[1]: sshd@11-172.31.24.171:22-147.75.109.163:51422.service: Deactivated successfully. Jul 2 08:59:06.456178 systemd-logind[2093]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:59:06.457191 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:59:06.460669 systemd-logind[2093]: Removed session 12. Jul 2 08:59:11.476262 systemd[1]: Started sshd@12-172.31.24.171:22-147.75.109.163:51430.service - OpenSSH per-connection server daemon (147.75.109.163:51430). Jul 2 08:59:11.655107 sshd[5837]: Accepted publickey for core from 147.75.109.163 port 51430 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:11.657747 sshd[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:11.667109 systemd-logind[2093]: New session 13 of user core. Jul 2 08:59:11.674418 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 08:59:11.970705 sshd[5837]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:11.984116 systemd[1]: sshd@12-172.31.24.171:22-147.75.109.163:51430.service: Deactivated successfully. 
Jul 2 08:59:11.999035 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:59:12.003965 systemd-logind[2093]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:59:12.009304 systemd-logind[2093]: Removed session 13. Jul 2 08:59:17.004264 systemd[1]: Started sshd@13-172.31.24.171:22-147.75.109.163:49614.service - OpenSSH per-connection server daemon (147.75.109.163:49614). Jul 2 08:59:17.185736 sshd[5857]: Accepted publickey for core from 147.75.109.163 port 49614 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:17.188405 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:17.196216 systemd-logind[2093]: New session 14 of user core. Jul 2 08:59:17.202635 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 08:59:17.456121 sshd[5857]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:17.461108 systemd[1]: sshd@13-172.31.24.171:22-147.75.109.163:49614.service: Deactivated successfully. Jul 2 08:59:17.469257 systemd-logind[2093]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:59:17.470142 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:59:17.475159 systemd-logind[2093]: Removed session 14. Jul 2 08:59:22.488271 systemd[1]: Started sshd@14-172.31.24.171:22-147.75.109.163:59064.service - OpenSSH per-connection server daemon (147.75.109.163:59064). Jul 2 08:59:22.665253 sshd[5901]: Accepted publickey for core from 147.75.109.163 port 59064 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:22.667204 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:22.675691 systemd-logind[2093]: New session 15 of user core. Jul 2 08:59:22.682870 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 08:59:22.920873 sshd[5901]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:22.928451 systemd-logind[2093]: Session 15 logged out. 
Waiting for processes to exit. Jul 2 08:59:22.930332 systemd[1]: sshd@14-172.31.24.171:22-147.75.109.163:59064.service: Deactivated successfully. Jul 2 08:59:22.936886 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:59:22.939890 systemd-logind[2093]: Removed session 15. Jul 2 08:59:27.962476 systemd[1]: Started sshd@15-172.31.24.171:22-147.75.109.163:59072.service - OpenSSH per-connection server daemon (147.75.109.163:59072). Jul 2 08:59:28.155386 sshd[5914]: Accepted publickey for core from 147.75.109.163 port 59072 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:28.158012 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:28.169735 systemd-logind[2093]: New session 16 of user core. Jul 2 08:59:28.178522 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 08:59:28.548980 sshd[5914]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:28.558003 systemd[1]: sshd@15-172.31.24.171:22-147.75.109.163:59072.service: Deactivated successfully. Jul 2 08:59:28.574542 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:59:28.575062 systemd-logind[2093]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:59:28.596333 systemd[1]: Started sshd@16-172.31.24.171:22-147.75.109.163:59080.service - OpenSSH per-connection server daemon (147.75.109.163:59080). Jul 2 08:59:28.600845 systemd-logind[2093]: Removed session 16. Jul 2 08:59:28.787323 sshd[5935]: Accepted publickey for core from 147.75.109.163 port 59080 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:28.790092 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:28.799174 systemd-logind[2093]: New session 17 of user core. Jul 2 08:59:28.805391 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 2 08:59:29.340809 sshd[5935]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:29.350754 systemd[1]: sshd@16-172.31.24.171:22-147.75.109.163:59080.service: Deactivated successfully. Jul 2 08:59:29.363823 systemd-logind[2093]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:59:29.364840 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:59:29.387121 systemd[1]: Started sshd@17-172.31.24.171:22-147.75.109.163:59096.service - OpenSSH per-connection server daemon (147.75.109.163:59096). Jul 2 08:59:29.389918 systemd-logind[2093]: Removed session 17. Jul 2 08:59:29.592149 sshd[5947]: Accepted publickey for core from 147.75.109.163 port 59096 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:29.596082 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:29.610223 systemd-logind[2093]: New session 18 of user core. Jul 2 08:59:29.623278 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 08:59:31.386356 sshd[5947]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:31.409732 systemd[1]: sshd@17-172.31.24.171:22-147.75.109.163:59096.service: Deactivated successfully. Jul 2 08:59:31.414260 systemd-logind[2093]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:59:31.425620 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:59:31.449502 systemd[1]: Started sshd@18-172.31.24.171:22-147.75.109.163:59106.service - OpenSSH per-connection server daemon (147.75.109.163:59106). Jul 2 08:59:31.454931 systemd-logind[2093]: Removed session 18. Jul 2 08:59:31.665735 sshd[5971]: Accepted publickey for core from 147.75.109.163 port 59106 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:31.667771 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:31.677163 systemd-logind[2093]: New session 19 of user core. 
Jul 2 08:59:31.682388 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 08:59:32.571724 sshd[5971]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:32.583411 systemd[1]: sshd@18-172.31.24.171:22-147.75.109.163:59106.service: Deactivated successfully. Jul 2 08:59:32.599655 systemd-logind[2093]: Session 19 logged out. Waiting for processes to exit. Jul 2 08:59:32.607112 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 08:59:32.621969 systemd[1]: Started sshd@19-172.31.24.171:22-147.75.109.163:47148.service - OpenSSH per-connection server daemon (147.75.109.163:47148). Jul 2 08:59:32.624862 systemd-logind[2093]: Removed session 19. Jul 2 08:59:32.837722 sshd[5986]: Accepted publickey for core from 147.75.109.163 port 47148 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:32.843215 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:32.856892 systemd-logind[2093]: New session 20 of user core. Jul 2 08:59:32.865736 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 08:59:33.104762 sshd[5986]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:33.114311 systemd[1]: sshd@19-172.31.24.171:22-147.75.109.163:47148.service: Deactivated successfully. Jul 2 08:59:33.121753 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 08:59:33.123259 systemd-logind[2093]: Session 20 logged out. Waiting for processes to exit. Jul 2 08:59:33.125324 systemd-logind[2093]: Removed session 20. Jul 2 08:59:36.956488 systemd[1]: run-containerd-runc-k8s.io-2685794ef849c2f84d862188dc48339be42058f2afe9a4ad0c4c771f03d130f5-runc.Q8Mw8d.mount: Deactivated successfully. Jul 2 08:59:38.136270 systemd[1]: Started sshd@20-172.31.24.171:22-147.75.109.163:47160.service - OpenSSH per-connection server daemon (147.75.109.163:47160). 
Jul 2 08:59:38.309812 sshd[6022]: Accepted publickey for core from 147.75.109.163 port 47160 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:38.312584 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:38.320924 systemd-logind[2093]: New session 21 of user core. Jul 2 08:59:38.329451 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 08:59:38.583096 sshd[6022]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:38.591088 systemd[1]: sshd@20-172.31.24.171:22-147.75.109.163:47160.service: Deactivated successfully. Jul 2 08:59:38.598663 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 08:59:38.600702 systemd-logind[2093]: Session 21 logged out. Waiting for processes to exit. Jul 2 08:59:38.602454 systemd-logind[2093]: Removed session 21. Jul 2 08:59:43.617247 systemd[1]: Started sshd@21-172.31.24.171:22-147.75.109.163:59396.service - OpenSSH per-connection server daemon (147.75.109.163:59396). Jul 2 08:59:43.790554 sshd[6044]: Accepted publickey for core from 147.75.109.163 port 59396 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:43.793256 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:43.805166 systemd-logind[2093]: New session 22 of user core. Jul 2 08:59:43.810285 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 08:59:44.046500 sshd[6044]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:44.052395 systemd-logind[2093]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:59:44.053456 systemd[1]: sshd@21-172.31.24.171:22-147.75.109.163:59396.service: Deactivated successfully. Jul 2 08:59:44.061453 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:59:44.067537 systemd-logind[2093]: Removed session 22. 
Jul 2 08:59:48.216194 kubelet[3640]: I0702 08:59:48.213933 3640 topology_manager.go:215] "Topology Admit Handler" podUID="a01ffdd4-1d47-44c5-945c-3126f046e89a" podNamespace="calico-apiserver" podName="calico-apiserver-5df766bf49-xzkwm" Jul 2 08:59:48.307724 kubelet[3640]: I0702 08:59:48.306391 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jq4p\" (UniqueName: \"kubernetes.io/projected/a01ffdd4-1d47-44c5-945c-3126f046e89a-kube-api-access-8jq4p\") pod \"calico-apiserver-5df766bf49-xzkwm\" (UID: \"a01ffdd4-1d47-44c5-945c-3126f046e89a\") " pod="calico-apiserver/calico-apiserver-5df766bf49-xzkwm" Jul 2 08:59:48.308214 kubelet[3640]: I0702 08:59:48.308086 3640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a01ffdd4-1d47-44c5-945c-3126f046e89a-calico-apiserver-certs\") pod \"calico-apiserver-5df766bf49-xzkwm\" (UID: \"a01ffdd4-1d47-44c5-945c-3126f046e89a\") " pod="calico-apiserver/calico-apiserver-5df766bf49-xzkwm" Jul 2 08:59:48.411767 kubelet[3640]: E0702 08:59:48.409180 3640 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 08:59:48.411767 kubelet[3640]: E0702 08:59:48.409299 3640 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01ffdd4-1d47-44c5-945c-3126f046e89a-calico-apiserver-certs podName:a01ffdd4-1d47-44c5-945c-3126f046e89a nodeName:}" failed. No retries permitted until 2024-07-02 08:59:48.909266139 +0000 UTC m=+105.037868317 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a01ffdd4-1d47-44c5-945c-3126f046e89a-calico-apiserver-certs") pod "calico-apiserver-5df766bf49-xzkwm" (UID: "a01ffdd4-1d47-44c5-945c-3126f046e89a") : secret "calico-apiserver-certs" not found Jul 2 08:59:48.910823 systemd[1]: run-containerd-runc-k8s.io-d04db0485208563fda1154ecbca179bda0b59342e2e88deb012ddc50d0feae0f-runc.qS5fHA.mount: Deactivated successfully. Jul 2 08:59:49.083398 systemd[1]: Started sshd@22-172.31.24.171:22-147.75.109.163:59402.service - OpenSSH per-connection server daemon (147.75.109.163:59402). Jul 2 08:59:49.127700 containerd[2129]: time="2024-07-02T08:59:49.127435193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df766bf49-xzkwm,Uid:a01ffdd4-1d47-44c5-945c-3126f046e89a,Namespace:calico-apiserver,Attempt:0,}" Jul 2 08:59:49.282832 sshd[6091]: Accepted publickey for core from 147.75.109.163 port 59402 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:49.288656 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:49.304340 systemd-logind[2093]: New session 23 of user core. Jul 2 08:59:49.315305 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 08:59:49.392161 systemd-networkd[1689]: cali58a9c6fa094: Link UP Jul 2 08:59:49.392729 systemd-networkd[1689]: cali58a9c6fa094: Gained carrier Jul 2 08:59:49.398643 (udev-worker)[6113]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.224 [INFO][6093] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0 calico-apiserver-5df766bf49- calico-apiserver a01ffdd4-1d47-44c5-945c-3126f046e89a 1126 0 2024-07-02 08:59:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df766bf49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-171 calico-apiserver-5df766bf49-xzkwm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58a9c6fa094 [] []}} ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.224 [INFO][6093] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.298 [INFO][6104] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" HandleID="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Workload="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.334 [INFO][6104] ipam_plugin.go 264: Auto assigning IP ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" 
HandleID="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Workload="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000261d40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-171", "pod":"calico-apiserver-5df766bf49-xzkwm", "timestamp":"2024-07-02 08:59:49.298934025 +0000 UTC"}, Hostname:"ip-172-31-24-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.334 [INFO][6104] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.334 [INFO][6104] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.334 [INFO][6104] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-171' Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.337 [INFO][6104] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.346 [INFO][6104] ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.354 [INFO][6104] ipam.go 489: Trying affinity for 192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.357 [INFO][6104] ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.361 [INFO][6104] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ip-172-31-24-171" Jul 2 08:59:49.427013 
containerd[2129]: 2024-07-02 08:59:49.362 [INFO][6104] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.365 [INFO][6104] ipam.go 1685: Creating new handle: k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.371 [INFO][6104] ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.380 [INFO][6104] ipam.go 1216: Successfully claimed IPs: [192.168.6.5/26] block=192.168.6.0/26 handle="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.381 [INFO][6104] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.5/26] handle="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" host="ip-172-31-24-171" Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.381 [INFO][6104] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:59:49.427013 containerd[2129]: 2024-07-02 08:59:49.381 [INFO][6104] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.6.5/26] IPv6=[] ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" HandleID="k8s-pod-network.a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Workload="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" Jul 2 08:59:49.434256 containerd[2129]: 2024-07-02 08:59:49.385 [INFO][6093] k8s.go 386: Populated endpoint ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0", GenerateName:"calico-apiserver-5df766bf49-", Namespace:"calico-apiserver", SelfLink:"", UID:"a01ffdd4-1d47-44c5-945c-3126f046e89a", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df766bf49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"", Pod:"calico-apiserver-5df766bf49-xzkwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58a9c6fa094", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:49.434256 containerd[2129]: 2024-07-02 08:59:49.385 [INFO][6093] k8s.go 387: Calico CNI using IPs: [192.168.6.5/32] ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" Jul 2 08:59:49.434256 containerd[2129]: 2024-07-02 08:59:49.385 [INFO][6093] dataplane_linux.go 68: Setting the host side veth name to cali58a9c6fa094 ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" Jul 2 08:59:49.434256 containerd[2129]: 2024-07-02 08:59:49.393 [INFO][6093] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" Jul 2 08:59:49.434256 containerd[2129]: 2024-07-02 08:59:49.393 [INFO][6093] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0", GenerateName:"calico-apiserver-5df766bf49-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"a01ffdd4-1d47-44c5-945c-3126f046e89a", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df766bf49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-171", ContainerID:"a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf", Pod:"calico-apiserver-5df766bf49-xzkwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58a9c6fa094", MAC:"ba:f3:c4:0e:c1:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:49.434256 containerd[2129]: 2024-07-02 08:59:49.413 [INFO][6093] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf" Namespace="calico-apiserver" Pod="calico-apiserver-5df766bf49-xzkwm" WorkloadEndpoint="ip--172--31--24--171-k8s-calico--apiserver--5df766bf49--xzkwm-eth0" Jul 2 08:59:49.528510 containerd[2129]: time="2024-07-02T08:59:49.528341985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:49.528812 containerd[2129]: time="2024-07-02T08:59:49.528470713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:49.528812 containerd[2129]: time="2024-07-02T08:59:49.528530803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:49.528812 containerd[2129]: time="2024-07-02T08:59:49.528567482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:49.692512 containerd[2129]: time="2024-07-02T08:59:49.692358593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df766bf49-xzkwm,Uid:a01ffdd4-1d47-44c5-945c-3126f046e89a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf\"" Jul 2 08:59:49.696890 containerd[2129]: time="2024-07-02T08:59:49.696071269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 08:59:49.704268 sshd[6091]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:49.714949 systemd[1]: sshd@22-172.31.24.171:22-147.75.109.163:59402.service: Deactivated successfully. Jul 2 08:59:49.730029 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:59:49.736277 systemd-logind[2093]: Session 23 logged out. Waiting for processes to exit. Jul 2 08:59:49.741029 systemd-logind[2093]: Removed session 23. 
Jul 2 08:59:50.979007 systemd-networkd[1689]: cali58a9c6fa094: Gained IPv6LL
Jul 2 08:59:53.019508 containerd[2129]: time="2024-07-02T08:59:53.019397333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:59:53.021477 containerd[2129]: time="2024-07-02T08:59:53.021403991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527"
Jul 2 08:59:53.024442 containerd[2129]: time="2024-07-02T08:59:53.024323286Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:59:53.030585 containerd[2129]: time="2024-07-02T08:59:53.030492558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:59:53.032284 containerd[2129]: time="2024-07-02T08:59:53.032073400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.335914847s"
Jul 2 08:59:53.032284 containerd[2129]: time="2024-07-02T08:59:53.032141822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Jul 2 08:59:53.037854 containerd[2129]: time="2024-07-02T08:59:53.037797584Z" level=info msg="CreateContainer within sandbox \"a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 08:59:53.062912 containerd[2129]: time="2024-07-02T08:59:53.062262439Z" level=info msg="CreateContainer within sandbox \"a5ac5c031ead7fb72ddc473d14023a5caf3aad5bfa1cb3c97ef3dc0f07daedbf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4df28b495faaa23b411de4e072ae2221f81e38e3e67985507203f1158930043a\""
Jul 2 08:59:53.069266 containerd[2129]: time="2024-07-02T08:59:53.069168878Z" level=info msg="StartContainer for \"4df28b495faaa23b411de4e072ae2221f81e38e3e67985507203f1158930043a\""
Jul 2 08:59:53.198025 ntpd[2080]: Listen normally on 12 cali58a9c6fa094 [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 08:59:53.199062 ntpd[2080]: 2 Jul 08:59:53 ntpd[2080]: Listen normally on 12 cali58a9c6fa094 [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 08:59:53.221453 containerd[2129]: time="2024-07-02T08:59:53.220393910Z" level=info msg="StartContainer for \"4df28b495faaa23b411de4e072ae2221f81e38e3e67985507203f1158930043a\" returns successfully"
Jul 2 08:59:53.865516 kubelet[3640]: I0702 08:59:53.864346 3640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5df766bf49-xzkwm" podStartSLOduration=2.526194545 podCreationTimestamp="2024-07-02 08:59:48 +0000 UTC" firstStartedPulling="2024-07-02 08:59:49.694618663 +0000 UTC m=+105.823220817" lastFinishedPulling="2024-07-02 08:59:53.032677145 +0000 UTC m=+109.161279311" observedRunningTime="2024-07-02 08:59:53.861012332 +0000 UTC m=+109.989614558" watchObservedRunningTime="2024-07-02 08:59:53.864253039 +0000 UTC m=+109.992855229"
Jul 2 08:59:54.743084 systemd[1]: Started sshd@23-172.31.24.171:22-147.75.109.163:55554.service - OpenSSH per-connection server daemon (147.75.109.163:55554).
Jul 2 08:59:54.944993 sshd[6233]: Accepted publickey for core from 147.75.109.163 port 55554 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 08:59:54.948920 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:59:54.958373 systemd-logind[2093]: New session 24 of user core.
Jul 2 08:59:54.966377 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 08:59:55.270393 sshd[6233]: pam_unix(sshd:session): session closed for user core
Jul 2 08:59:55.279546 systemd[1]: sshd@23-172.31.24.171:22-147.75.109.163:55554.service: Deactivated successfully.
Jul 2 08:59:55.289208 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 08:59:55.291545 systemd-logind[2093]: Session 24 logged out. Waiting for processes to exit.
Jul 2 08:59:55.296300 systemd-logind[2093]: Removed session 24.
Jul 2 09:00:00.305545 systemd[1]: Started sshd@24-172.31.24.171:22-147.75.109.163:55558.service - OpenSSH per-connection server daemon (147.75.109.163:55558).
Jul 2 09:00:00.519758 sshd[6271]: Accepted publickey for core from 147.75.109.163 port 55558 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:00.523506 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:00.538869 systemd-logind[2093]: New session 25 of user core.
Jul 2 09:00:00.545404 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 09:00:00.827308 sshd[6271]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:00.839423 systemd-logind[2093]: Session 25 logged out. Waiting for processes to exit.
Jul 2 09:00:00.842585 systemd[1]: sshd@24-172.31.24.171:22-147.75.109.163:55558.service: Deactivated successfully.
Jul 2 09:00:00.853643 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 09:00:00.858522 systemd-logind[2093]: Removed session 25.
Jul 2 09:00:05.858237 systemd[1]: Started sshd@25-172.31.24.171:22-147.75.109.163:46674.service - OpenSSH per-connection server daemon (147.75.109.163:46674).
Jul 2 09:00:06.034063 sshd[6292]: Accepted publickey for core from 147.75.109.163 port 46674 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:06.036657 sshd[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:06.045120 systemd-logind[2093]: New session 26 of user core.
Jul 2 09:00:06.051347 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 09:00:06.305716 sshd[6292]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:06.312040 systemd[1]: sshd@25-172.31.24.171:22-147.75.109.163:46674.service: Deactivated successfully.
Jul 2 09:00:06.323989 systemd-logind[2093]: Session 26 logged out. Waiting for processes to exit.
Jul 2 09:00:06.324965 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 09:00:06.330998 systemd-logind[2093]: Removed session 26.
Jul 2 09:00:11.338227 systemd[1]: Started sshd@26-172.31.24.171:22-147.75.109.163:46680.service - OpenSSH per-connection server daemon (147.75.109.163:46680).
Jul 2 09:00:11.520403 sshd[6332]: Accepted publickey for core from 147.75.109.163 port 46680 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:11.522717 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:11.534312 systemd-logind[2093]: New session 27 of user core.
Jul 2 09:00:11.545273 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 09:00:11.789037 sshd[6332]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:11.794606 systemd-logind[2093]: Session 27 logged out. Waiting for processes to exit.
Jul 2 09:00:11.796017 systemd[1]: sshd@26-172.31.24.171:22-147.75.109.163:46680.service: Deactivated successfully.
Jul 2 09:00:11.810250 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 09:00:11.813166 systemd-logind[2093]: Removed session 27.
Jul 2 09:00:25.747093 containerd[2129]: time="2024-07-02T09:00:25.747012958Z" level=info msg="shim disconnected" id=5150f4f5cab5b635b098c4d7625d3419249dadd4fcd08d7adf81c1b3ad17288a namespace=k8s.io
Jul 2 09:00:25.751059 containerd[2129]: time="2024-07-02T09:00:25.748859109Z" level=warning msg="cleaning up after shim disconnected" id=5150f4f5cab5b635b098c4d7625d3419249dadd4fcd08d7adf81c1b3ad17288a namespace=k8s.io
Jul 2 09:00:25.751059 containerd[2129]: time="2024-07-02T09:00:25.748940257Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:00:25.754115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5150f4f5cab5b635b098c4d7625d3419249dadd4fcd08d7adf81c1b3ad17288a-rootfs.mount: Deactivated successfully.
Jul 2 09:00:25.940247 kubelet[3640]: I0702 09:00:25.939832 3640 scope.go:117] "RemoveContainer" containerID="5150f4f5cab5b635b098c4d7625d3419249dadd4fcd08d7adf81c1b3ad17288a"
Jul 2 09:00:25.945744 containerd[2129]: time="2024-07-02T09:00:25.945655627Z" level=info msg="CreateContainer within sandbox \"f245498f1108d2fcfa16b7095a503630ea7b991e73d309126595f7129550d6d6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 09:00:25.975194 containerd[2129]: time="2024-07-02T09:00:25.975062847Z" level=info msg="CreateContainer within sandbox \"f245498f1108d2fcfa16b7095a503630ea7b991e73d309126595f7129550d6d6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a6458a7454110d2bd2ac8e7c21665603db4a2d8c46b968cea39dffc5a921a985\""
Jul 2 09:00:25.977796 containerd[2129]: time="2024-07-02T09:00:25.975722733Z" level=info msg="StartContainer for \"a6458a7454110d2bd2ac8e7c21665603db4a2d8c46b968cea39dffc5a921a985\""
Jul 2 09:00:26.113197 containerd[2129]: time="2024-07-02T09:00:26.113118162Z" level=info msg="StartContainer for \"a6458a7454110d2bd2ac8e7c21665603db4a2d8c46b968cea39dffc5a921a985\" returns successfully"
Jul 2 09:00:26.277948 kubelet[3640]: E0702 09:00:26.277664 3640 controller.go:193] "Failed to update lease" err="Put \"https://172.31.24.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-171?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 09:00:26.599567 containerd[2129]: time="2024-07-02T09:00:26.599348919Z" level=info msg="shim disconnected" id=4147881de338f1879f15129b1e38dca060f8262366241656c02004d69cd99c3a namespace=k8s.io
Jul 2 09:00:26.600083 containerd[2129]: time="2024-07-02T09:00:26.599751144Z" level=warning msg="cleaning up after shim disconnected" id=4147881de338f1879f15129b1e38dca060f8262366241656c02004d69cd99c3a namespace=k8s.io
Jul 2 09:00:26.600083 containerd[2129]: time="2024-07-02T09:00:26.599964731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:00:26.756306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4147881de338f1879f15129b1e38dca060f8262366241656c02004d69cd99c3a-rootfs.mount: Deactivated successfully.
Jul 2 09:00:26.947630 kubelet[3640]: I0702 09:00:26.946106 3640 scope.go:117] "RemoveContainer" containerID="4147881de338f1879f15129b1e38dca060f8262366241656c02004d69cd99c3a"
Jul 2 09:00:26.956035 containerd[2129]: time="2024-07-02T09:00:26.955620176Z" level=info msg="CreateContainer within sandbox \"fc6075aaf8fe5cde5222c01c2622d26b13e210ff50fb77aad54ce1ffca78093a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 09:00:26.989892 containerd[2129]: time="2024-07-02T09:00:26.987131520Z" level=info msg="CreateContainer within sandbox \"fc6075aaf8fe5cde5222c01c2622d26b13e210ff50fb77aad54ce1ffca78093a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4189e258515c6406ae6284c865d1e75821511314ae442b16e5c4d49a451fcf49\""
Jul 2 09:00:26.993241 containerd[2129]: time="2024-07-02T09:00:26.993044331Z" level=info msg="StartContainer for \"4189e258515c6406ae6284c865d1e75821511314ae442b16e5c4d49a451fcf49\""
Jul 2 09:00:27.162351 containerd[2129]: time="2024-07-02T09:00:27.162029791Z" level=info msg="StartContainer for \"4189e258515c6406ae6284c865d1e75821511314ae442b16e5c4d49a451fcf49\" returns successfully"
Jul 2 09:00:30.624159 containerd[2129]: time="2024-07-02T09:00:30.624026176Z" level=info msg="shim disconnected" id=9b0196a82e85681824116c9b1b2d31dc8c08544e2eb55d9372cb1154c444450b namespace=k8s.io
Jul 2 09:00:30.624159 containerd[2129]: time="2024-07-02T09:00:30.624123749Z" level=warning msg="cleaning up after shim disconnected" id=9b0196a82e85681824116c9b1b2d31dc8c08544e2eb55d9372cb1154c444450b namespace=k8s.io
Jul 2 09:00:30.624159 containerd[2129]: time="2024-07-02T09:00:30.624145912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:00:30.628854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b0196a82e85681824116c9b1b2d31dc8c08544e2eb55d9372cb1154c444450b-rootfs.mount: Deactivated successfully.
Jul 2 09:00:30.650564 containerd[2129]: time="2024-07-02T09:00:30.650490450Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:00:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 09:00:30.970340 kubelet[3640]: I0702 09:00:30.970188 3640 scope.go:117] "RemoveContainer" containerID="9b0196a82e85681824116c9b1b2d31dc8c08544e2eb55d9372cb1154c444450b"
Jul 2 09:00:30.974197 containerd[2129]: time="2024-07-02T09:00:30.974099515Z" level=info msg="CreateContainer within sandbox \"7fc94343c8a6f372d35bae2573d60c8248bd053f2ccc9c9b8dd1bc62c35f3a88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 09:00:31.000051 containerd[2129]: time="2024-07-02T09:00:30.999891933Z" level=info msg="CreateContainer within sandbox \"7fc94343c8a6f372d35bae2573d60c8248bd053f2ccc9c9b8dd1bc62c35f3a88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"dccfa71798738a02e5cfc8bbd737e21bfe9e48f06264bd16ed240e7dd9a57e76\""
Jul 2 09:00:31.000948 containerd[2129]: time="2024-07-02T09:00:31.000887999Z" level=info msg="StartContainer for \"dccfa71798738a02e5cfc8bbd737e21bfe9e48f06264bd16ed240e7dd9a57e76\""
Jul 2 09:00:31.124186 containerd[2129]: time="2024-07-02T09:00:31.124108568Z" level=info msg="StartContainer for \"dccfa71798738a02e5cfc8bbd737e21bfe9e48f06264bd16ed240e7dd9a57e76\" returns successfully"