Oct 8 19:29:39.171607 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Oct 8 19:29:39.171653 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:29:39.171678 kernel: KASLR disabled due to lack of seed
Oct 8 19:29:39.171695 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:29:39.171710 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Oct 8 19:29:39.171726 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:29:39.171744 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Oct 8 19:29:39.171760 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Oct 8 19:29:39.171776 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 8 19:29:39.171791 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Oct 8 19:29:39.171811 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 8 19:29:39.171827 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Oct 8 19:29:39.171843 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Oct 8 19:29:39.171859 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Oct 8 19:29:39.171877 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 8 19:29:39.171898 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Oct 8 19:29:39.171915 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Oct 8 19:29:39.171931 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Oct 8 19:29:39.171948 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Oct 8 19:29:39.171964 kernel: printk: bootconsole [uart0] enabled
Oct 8 19:29:39.171980 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:29:39.171997 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:29:39.172014 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Oct 8 19:29:39.172030 kernel: Zone ranges:
Oct 8 19:29:39.172047 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 19:29:39.172063 kernel: DMA32 empty
Oct 8 19:29:39.172083 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Oct 8 19:29:39.172100 kernel: Movable zone start for each node
Oct 8 19:29:39.172116 kernel: Early memory node ranges
Oct 8 19:29:39.172133 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Oct 8 19:29:39.172150 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Oct 8 19:29:39.172166 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Oct 8 19:29:39.172182 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Oct 8 19:29:39.172261 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Oct 8 19:29:39.172279 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Oct 8 19:29:39.172297 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Oct 8 19:29:39.172313 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Oct 8 19:29:39.172330 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:29:39.172353 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Oct 8 19:29:39.172370 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:29:39.172394 kernel: psci: PSCIv1.0 detected in firmware.
Oct 8 19:29:39.172411 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:29:39.172429 kernel: psci: Trusted OS migration not required
Oct 8 19:29:39.172450 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:29:39.172468 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:29:39.172486 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:29:39.172504 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 19:29:39.172521 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:29:39.172539 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:29:39.172556 kernel: CPU features: detected: Spectre-v2
Oct 8 19:29:39.172574 kernel: CPU features: detected: Spectre-v3a
Oct 8 19:29:39.172591 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:29:39.172609 kernel: CPU features: detected: ARM erratum 1742098
Oct 8 19:29:39.172626 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Oct 8 19:29:39.172648 kernel: alternatives: applying boot alternatives
Oct 8 19:29:39.172668 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:29:39.172687 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:29:39.172704 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:29:39.172722 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:29:39.172739 kernel: Fallback order for Node 0: 0
Oct 8 19:29:39.172757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Oct 8 19:29:39.172774 kernel: Policy zone: Normal
Oct 8 19:29:39.172792 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:29:39.172809 kernel: software IO TLB: area num 2.
Oct 8 19:29:39.172827 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Oct 8 19:29:39.172850 kernel: Memory: 3820472K/4030464K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 209992K reserved, 0K cma-reserved)
Oct 8 19:29:39.172867 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:29:39.172885 kernel: trace event string verifier disabled
Oct 8 19:29:39.172902 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:29:39.172920 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:29:39.172938 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:29:39.172956 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:29:39.172974 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:29:39.172992 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:29:39.173009 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:29:39.173027 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:29:39.173048 kernel: GICv3: 96 SPIs implemented
Oct 8 19:29:39.173066 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:29:39.173083 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:29:39.173100 kernel: GICv3: GICv3 features: 16 PPIs
Oct 8 19:29:39.173117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Oct 8 19:29:39.173135 kernel: ITS [mem 0x10080000-0x1009ffff]
Oct 8 19:29:39.173152 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:29:39.173170 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:29:39.173242 kernel: GICv3: using LPI property table @0x00000004000e0000
Oct 8 19:29:39.173266 kernel: ITS: Using hypervisor restricted LPI range [128]
Oct 8 19:29:39.173284 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Oct 8 19:29:39.173302 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:29:39.173326 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Oct 8 19:29:39.173343 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Oct 8 19:29:39.173361 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Oct 8 19:29:39.173378 kernel: Console: colour dummy device 80x25
Oct 8 19:29:39.173396 kernel: printk: console [tty1] enabled
Oct 8 19:29:39.173414 kernel: ACPI: Core revision 20230628
Oct 8 19:29:39.173432 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Oct 8 19:29:39.173450 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:29:39.173468 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:29:39.173485 kernel: SELinux: Initializing.
Oct 8 19:29:39.173508 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:29:39.173526 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:29:39.173544 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:29:39.173562 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:29:39.173580 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:29:39.173598 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:29:39.173616 kernel: Platform MSI: ITS@0x10080000 domain created
Oct 8 19:29:39.173633 kernel: PCI/MSI: ITS@0x10080000 domain created
Oct 8 19:29:39.173650 kernel: Remapping and enabling EFI services.
Oct 8 19:29:39.173672 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:29:39.173690 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:29:39.173708 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Oct 8 19:29:39.173726 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Oct 8 19:29:39.173744 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Oct 8 19:29:39.173761 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:29:39.173779 kernel: SMP: Total of 2 processors activated.
Oct 8 19:29:39.173796 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:29:39.173814 kernel: CPU features: detected: 32-bit EL1 Support
Oct 8 19:29:39.173836 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:29:39.173854 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:29:39.173883 kernel: alternatives: applying system-wide alternatives
Oct 8 19:29:39.173905 kernel: devtmpfs: initialized
Oct 8 19:29:39.173924 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:29:39.173942 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:29:39.173961 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:29:39.173979 kernel: SMBIOS 3.0.0 present.
Oct 8 19:29:39.173998 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Oct 8 19:29:39.174020 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:29:39.174039 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:29:39.174058 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:29:39.174076 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:29:39.174094 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:29:39.174113 kernel: audit: type=2000 audit(0.291:1): state=initialized audit_enabled=0 res=1
Oct 8 19:29:39.174131 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:29:39.174154 kernel: cpuidle: using governor menu
Oct 8 19:29:39.174173 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:29:39.174214 kernel: ASID allocator initialised with 65536 entries
Oct 8 19:29:39.174237 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:29:39.174256 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:29:39.174274 kernel: Modules: 17584 pages in range for non-PLT usage
Oct 8 19:29:39.174293 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:29:39.174311 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:29:39.174330 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:29:39.174354 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:29:39.174373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:29:39.174392 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:29:39.174410 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:29:39.174429 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:29:39.174447 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:29:39.174465 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:29:39.174484 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:29:39.174502 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:29:39.174524 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:29:39.174543 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:29:39.174562 kernel: ACPI: Interpreter enabled
Oct 8 19:29:39.174580 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:29:39.174598 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:29:39.174617 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Oct 8 19:29:39.174921 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:29:39.175131 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:29:39.175370 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:29:39.175577 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Oct 8 19:29:39.175788 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Oct 8 19:29:39.175814 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Oct 8 19:29:39.175833 kernel: acpiphp: Slot [1] registered
Oct 8 19:29:39.175852 kernel: acpiphp: Slot [2] registered
Oct 8 19:29:39.175871 kernel: acpiphp: Slot [3] registered
Oct 8 19:29:39.175889 kernel: acpiphp: Slot [4] registered
Oct 8 19:29:39.175907 kernel: acpiphp: Slot [5] registered
Oct 8 19:29:39.175932 kernel: acpiphp: Slot [6] registered
Oct 8 19:29:39.175950 kernel: acpiphp: Slot [7] registered
Oct 8 19:29:39.175969 kernel: acpiphp: Slot [8] registered
Oct 8 19:29:39.175987 kernel: acpiphp: Slot [9] registered
Oct 8 19:29:39.176006 kernel: acpiphp: Slot [10] registered
Oct 8 19:29:39.176024 kernel: acpiphp: Slot [11] registered
Oct 8 19:29:39.176042 kernel: acpiphp: Slot [12] registered
Oct 8 19:29:39.176060 kernel: acpiphp: Slot [13] registered
Oct 8 19:29:39.176079 kernel: acpiphp: Slot [14] registered
Oct 8 19:29:39.176101 kernel: acpiphp: Slot [15] registered
Oct 8 19:29:39.176120 kernel: acpiphp: Slot [16] registered
Oct 8 19:29:39.176138 kernel: acpiphp: Slot [17] registered
Oct 8 19:29:39.176156 kernel: acpiphp: Slot [18] registered
Oct 8 19:29:39.176174 kernel: acpiphp: Slot [19] registered
Oct 8 19:29:39.176211 kernel: acpiphp: Slot [20] registered
Oct 8 19:29:39.176232 kernel: acpiphp: Slot [21] registered
Oct 8 19:29:39.176250 kernel: acpiphp: Slot [22] registered
Oct 8 19:29:39.176269 kernel: acpiphp: Slot [23] registered
Oct 8 19:29:39.176287 kernel: acpiphp: Slot [24] registered
Oct 8 19:29:39.176312 kernel: acpiphp: Slot [25] registered
Oct 8 19:29:39.176330 kernel: acpiphp: Slot [26] registered
Oct 8 19:29:39.176348 kernel: acpiphp: Slot [27] registered
Oct 8 19:29:39.176366 kernel: acpiphp: Slot [28] registered
Oct 8 19:29:39.176385 kernel: acpiphp: Slot [29] registered
Oct 8 19:29:39.176403 kernel: acpiphp: Slot [30] registered
Oct 8 19:29:39.176421 kernel: acpiphp: Slot [31] registered
Oct 8 19:29:39.176439 kernel: PCI host bridge to bus 0000:00
Oct 8 19:29:39.176651 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Oct 8 19:29:39.176856 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:29:39.177046 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:29:39.177257 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Oct 8 19:29:39.177506 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Oct 8 19:29:39.177743 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Oct 8 19:29:39.177965 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Oct 8 19:29:39.178212 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct 8 19:29:39.178435 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Oct 8 19:29:39.178644 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:29:39.178867 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct 8 19:29:39.179077 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Oct 8 19:29:39.179312 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Oct 8 19:29:39.179524 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Oct 8 19:29:39.179740 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:29:39.179948 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Oct 8 19:29:39.180157 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Oct 8 19:29:39.180389 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Oct 8 19:29:39.180601 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Oct 8 19:29:39.180811 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Oct 8 19:29:39.181002 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Oct 8 19:29:39.181214 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:29:39.181409 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:29:39.181435 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:29:39.181455 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:29:39.181474 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:29:39.181493 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:29:39.181511 kernel: iommu: Default domain type: Translated
Oct 8 19:29:39.181530 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:29:39.181555 kernel: efivars: Registered efivars operations
Oct 8 19:29:39.181573 kernel: vgaarb: loaded
Oct 8 19:29:39.181591 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:29:39.181610 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:29:39.181628 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:29:39.181647 kernel: pnp: PnP ACPI init
Oct 8 19:29:39.181855 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Oct 8 19:29:39.181886 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:29:39.181911 kernel: NET: Registered PF_INET protocol family
Oct 8 19:29:39.181930 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:29:39.181949 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:29:39.181968 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:29:39.181987 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:29:39.182006 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:29:39.182024 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:29:39.182043 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:29:39.182062 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:29:39.182085 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:29:39.182104 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:29:39.182122 kernel: kvm [1]: HYP mode not available
Oct 8 19:29:39.182140 kernel: Initialise system trusted keyrings
Oct 8 19:29:39.182159 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:29:39.182177 kernel: Key type asymmetric registered
Oct 8 19:29:39.182220 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:29:39.182242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:29:39.182261 kernel: io scheduler mq-deadline registered
Oct 8 19:29:39.182285 kernel: io scheduler kyber registered
Oct 8 19:29:39.182304 kernel: io scheduler bfq registered
Oct 8 19:29:39.182521 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Oct 8 19:29:39.182549 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:29:39.182568 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:29:39.182587 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Oct 8 19:29:39.182605 kernel: ACPI: button: Sleep Button [SLPB]
Oct 8 19:29:39.182624 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:29:39.182648 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Oct 8 19:29:39.182856 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Oct 8 19:29:39.182882 kernel: printk: console [ttyS0] disabled
Oct 8 19:29:39.182901 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Oct 8 19:29:39.182920 kernel: printk: console [ttyS0] enabled
Oct 8 19:29:39.182938 kernel: printk: bootconsole [uart0] disabled
Oct 8 19:29:39.182957 kernel: thunder_xcv, ver 1.0
Oct 8 19:29:39.182976 kernel: thunder_bgx, ver 1.0
Oct 8 19:29:39.182994 kernel: nicpf, ver 1.0
Oct 8 19:29:39.183012 kernel: nicvf, ver 1.0
Oct 8 19:29:39.183269 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:29:39.183472 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:29:38 UTC (1728415778)
Oct 8 19:29:39.183498 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:29:39.183517 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Oct 8 19:29:39.183536 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:29:39.183554 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:29:39.183573 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:29:39.183591 kernel: Segment Routing with IPv6
Oct 8 19:29:39.183616 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:29:39.183634 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:29:39.183653 kernel: Key type dns_resolver registered
Oct 8 19:29:39.183671 kernel: registered taskstats version 1
Oct 8 19:29:39.183689 kernel: Loading compiled-in X.509 certificates
Oct 8 19:29:39.183708 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e'
Oct 8 19:29:39.183726 kernel: Key type .fscrypt registered
Oct 8 19:29:39.183744 kernel: Key type fscrypt-provisioning registered
Oct 8 19:29:39.183762 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:29:39.183785 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:29:39.183803 kernel: ima: No architecture policies found
Oct 8 19:29:39.183822 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:29:39.183840 kernel: clk: Disabling unused clocks
Oct 8 19:29:39.183858 kernel: Freeing unused kernel memory: 39104K
Oct 8 19:29:39.183877 kernel: Run /init as init process
Oct 8 19:29:39.183895 kernel: with arguments:
Oct 8 19:29:39.183913 kernel: /init
Oct 8 19:29:39.183931 kernel: with environment:
Oct 8 19:29:39.183953 kernel: HOME=/
Oct 8 19:29:39.183972 kernel: TERM=linux
Oct 8 19:29:39.183989 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:29:39.184012 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:29:39.184035 systemd[1]: Detected virtualization amazon.
Oct 8 19:29:39.184055 systemd[1]: Detected architecture arm64.
Oct 8 19:29:39.184074 systemd[1]: Running in initrd.
Oct 8 19:29:39.184094 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:29:39.184118 systemd[1]: Hostname set to .
Oct 8 19:29:39.184138 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:29:39.184158 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:29:39.184177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:29:39.184216 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:29:39.184240 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:29:39.184261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:29:39.184287 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:29:39.184309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:29:39.184332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:29:39.184352 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:29:39.184372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:29:39.184392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:29:39.184412 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:29:39.184437 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:29:39.184457 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:29:39.184477 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:29:39.184497 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:29:39.184517 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:29:39.184537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:29:39.184558 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:29:39.184578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:29:39.184598 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:29:39.184623 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:29:39.184643 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:29:39.184663 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:29:39.184684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:29:39.184704 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:29:39.184724 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:29:39.184744 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:29:39.184764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:29:39.184788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:29:39.184809 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:29:39.184829 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:29:39.184881 systemd-journald[250]: Collecting audit messages is disabled.
Oct 8 19:29:39.184927 systemd-journald[250]: Journal started
Oct 8 19:29:39.184964 systemd-journald[250]: Runtime Journal (/run/log/journal/ec25669cecc3e800a03a0a39efbe2cb8) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:29:39.194216 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:29:39.195050 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:29:39.197446 systemd-modules-load[251]: Inserted module 'overlay'
Oct 8 19:29:39.217494 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:29:39.238184 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:29:39.240465 systemd-modules-load[251]: Inserted module 'br_netfilter'
Oct 8 19:29:39.245385 kernel: Bridge firewalling registered
Oct 8 19:29:39.248666 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:29:39.254490 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:29:39.259949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:29:39.272549 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:29:39.285638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:29:39.290138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:29:39.297133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:29:39.320498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:29:39.341122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:29:39.355660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:29:39.364583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:29:39.375812 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:29:39.388489 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:29:39.420218 dracut-cmdline[288]: dracut-dracut-053
Oct 8 19:29:39.427061 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:29:39.454919 systemd-resolved[280]: Positive Trust Anchors:
Oct 8 19:29:39.456053 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:29:39.456118 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:29:39.594247 kernel: SCSI subsystem initialized
Oct 8 19:29:39.602274 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:29:39.615312 kernel: iscsi: registered transport (tcp)
Oct 8 19:29:39.637778 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:29:39.637862 kernel: QLogic iSCSI HBA Driver
Oct 8 19:29:39.695231 kernel: random: crng init done
Oct 8 19:29:39.695678 systemd-resolved[280]: Defaulting to hostname 'linux'.
Oct 8 19:29:39.698391 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:29:39.700552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:29:39.726987 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:29:39.739619 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:29:39.774009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:29:39.774096 kernel: device-mapper: uevent: version 1.0.3 Oct 8 19:29:39.775756 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 19:29:39.841251 kernel: raid6: neonx8 gen() 6732 MB/s Oct 8 19:29:39.858221 kernel: raid6: neonx4 gen() 6541 MB/s Oct 8 19:29:39.875221 kernel: raid6: neonx2 gen() 5431 MB/s Oct 8 19:29:39.892221 kernel: raid6: neonx1 gen() 3971 MB/s Oct 8 19:29:39.909221 kernel: raid6: int64x8 gen() 3829 MB/s Oct 8 19:29:39.926221 kernel: raid6: int64x4 gen() 3723 MB/s Oct 8 19:29:39.943223 kernel: raid6: int64x2 gen() 3609 MB/s Oct 8 19:29:39.960974 kernel: raid6: int64x1 gen() 2759 MB/s Oct 8 19:29:39.961017 kernel: raid6: using algorithm neonx8 gen() 6732 MB/s Oct 8 19:29:39.978963 kernel: raid6: .... xor() 4818 MB/s, rmw enabled Oct 8 19:29:39.979010 kernel: raid6: using neon recovery algorithm Oct 8 19:29:39.987516 kernel: xor: measuring software checksum speed Oct 8 19:29:39.987568 kernel: 8regs : 10975 MB/sec Oct 8 19:29:39.988614 kernel: 32regs : 11939 MB/sec Oct 8 19:29:39.989790 kernel: arm64_neon : 9509 MB/sec Oct 8 19:29:39.989823 kernel: xor: using function: 32regs (11939 MB/sec) Oct 8 19:29:40.076235 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 19:29:40.095238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:29:40.106526 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:29:40.151573 systemd-udevd[469]: Using default interface naming scheme 'v255'. Oct 8 19:29:40.161061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:29:40.178864 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 19:29:40.206428 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Oct 8 19:29:40.262030 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 8 19:29:40.274523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:29:40.391711 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:29:40.407930 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 19:29:40.450791 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 19:29:40.457772 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:29:40.461147 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:29:40.466294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:29:40.484160 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 19:29:40.522302 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:29:40.592026 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 19:29:40.592116 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 8 19:29:40.599901 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 8 19:29:40.600652 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 8 19:29:40.615305 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:0a:14:8c:2f:1b Oct 8 19:29:40.617870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:29:40.619902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:29:40.626611 (udev-worker)[541]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:29:40.630089 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 8 19:29:40.630125 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 8 19:29:40.632528 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:29:40.634723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 8 19:29:40.639075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:29:40.646306 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:29:40.655220 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 8 19:29:40.660742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:29:40.666409 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 19:29:40.666447 kernel: GPT:9289727 != 16777215 Oct 8 19:29:40.666472 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 19:29:40.666496 kernel: GPT:9289727 != 16777215 Oct 8 19:29:40.666524 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 19:29:40.666548 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 8 19:29:40.695795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:29:40.712516 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:29:40.758642 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:29:40.792382 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (513) Oct 8 19:29:40.807075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Oct 8 19:29:40.841859 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (540) Oct 8 19:29:40.865892 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Oct 8 19:29:40.916095 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 8 19:29:40.932259 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Oct 8 19:29:40.936980 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Oct 8 19:29:40.954571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 19:29:40.974242 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 8 19:29:40.974693 disk-uuid[658]: Primary Header is updated. Oct 8 19:29:40.974693 disk-uuid[658]: Secondary Entries is updated. Oct 8 19:29:40.974693 disk-uuid[658]: Secondary Header is updated. Oct 8 19:29:41.998244 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 8 19:29:41.998673 disk-uuid[659]: The operation has completed successfully. Oct 8 19:29:42.178964 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 19:29:42.179224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 19:29:42.230647 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 19:29:42.240118 sh[1004]: Success Oct 8 19:29:42.269247 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 19:29:42.369263 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 19:29:42.377424 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 19:29:42.386266 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 8 19:29:42.414092 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575 Oct 8 19:29:42.414182 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:29:42.414229 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 19:29:42.416946 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 19:29:42.416981 kernel: BTRFS info (device dm-0): using free space tree Oct 8 19:29:42.487227 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 19:29:42.517715 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 19:29:42.521693 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 19:29:42.529490 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 19:29:42.540800 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 19:29:42.567230 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:29:42.567319 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:29:42.568664 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 8 19:29:42.574235 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 8 19:29:42.591610 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 19:29:42.594424 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:29:42.615094 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 19:29:42.625639 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 19:29:42.716415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Oct 8 19:29:42.729560 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:29:42.786636 systemd-networkd[1197]: lo: Link UP Oct 8 19:29:42.786660 systemd-networkd[1197]: lo: Gained carrier Oct 8 19:29:42.791563 systemd-networkd[1197]: Enumeration completed Oct 8 19:29:42.791743 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:29:42.793900 systemd[1]: Reached target network.target - Network. Oct 8 19:29:42.795906 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:29:42.795914 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:29:42.808755 systemd-networkd[1197]: eth0: Link UP Oct 8 19:29:42.808776 systemd-networkd[1197]: eth0: Gained carrier Oct 8 19:29:42.808795 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:29:42.832278 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.26.181/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 8 19:29:43.029147 ignition[1119]: Ignition 2.18.0 Oct 8 19:29:43.029166 ignition[1119]: Stage: fetch-offline Oct 8 19:29:43.029721 ignition[1119]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:29:43.029747 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:29:43.032328 ignition[1119]: Ignition finished successfully Oct 8 19:29:43.039093 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:29:43.052491 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 8 19:29:43.088843 ignition[1208]: Ignition 2.18.0 Oct 8 19:29:43.090815 ignition[1208]: Stage: fetch Oct 8 19:29:43.092456 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:29:43.092494 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:29:43.094183 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:29:43.105483 ignition[1208]: PUT result: OK Oct 8 19:29:43.108170 ignition[1208]: parsed url from cmdline: "" Oct 8 19:29:43.108262 ignition[1208]: no config URL provided Oct 8 19:29:43.108278 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:29:43.108305 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:29:43.108337 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:29:43.112813 ignition[1208]: PUT result: OK Oct 8 19:29:43.113172 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 8 19:29:43.115722 ignition[1208]: GET result: OK Oct 8 19:29:43.115895 ignition[1208]: parsing config with SHA512: b7c179c581359a08422733f62a6679a2489a95b4a6bfb50ec7f61536b771d24c571b5508217241a7d55e31a0eda43cbb88d59bfb82bf9bc27b5a7d99d36d1a77 Oct 8 19:29:43.127281 unknown[1208]: fetched base config from "system" Oct 8 19:29:43.127885 ignition[1208]: fetch: fetch complete Oct 8 19:29:43.127302 unknown[1208]: fetched base config from "system" Oct 8 19:29:43.127896 ignition[1208]: fetch: fetch passed Oct 8 19:29:43.127316 unknown[1208]: fetched user config from "aws" Oct 8 19:29:43.127959 ignition[1208]: Ignition finished successfully Oct 8 19:29:43.133282 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 19:29:43.149517 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 8 19:29:43.173896 ignition[1215]: Ignition 2.18.0 Oct 8 19:29:43.173916 ignition[1215]: Stage: kargs Oct 8 19:29:43.174550 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:29:43.174575 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:29:43.175786 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:29:43.179129 ignition[1215]: PUT result: OK Oct 8 19:29:43.186913 ignition[1215]: kargs: kargs passed Oct 8 19:29:43.187742 ignition[1215]: Ignition finished successfully Oct 8 19:29:43.193248 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 19:29:43.204507 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 19:29:43.229529 ignition[1222]: Ignition 2.18.0 Oct 8 19:29:43.229558 ignition[1222]: Stage: disks Oct 8 19:29:43.230448 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:29:43.230474 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:29:43.230622 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:29:43.232743 ignition[1222]: PUT result: OK Oct 8 19:29:43.239152 ignition[1222]: disks: disks passed Oct 8 19:29:43.239279 ignition[1222]: Ignition finished successfully Oct 8 19:29:43.246966 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 19:29:43.251495 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 19:29:43.253832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 19:29:43.256090 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:29:43.257949 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:29:43.259887 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:29:43.277585 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 8 19:29:43.326645 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 19:29:43.337408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 19:29:43.349587 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 19:29:43.437314 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none. Oct 8 19:29:43.438336 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 19:29:43.441738 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 19:29:43.459371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:29:43.464412 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 19:29:43.470399 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 19:29:43.473625 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 19:29:43.473681 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:29:43.496157 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 19:29:43.507488 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 19:29:43.515501 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1251) Oct 8 19:29:43.522365 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:29:43.522431 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:29:43.523666 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 8 19:29:43.529221 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 8 19:29:43.532332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 19:29:43.894578 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 19:29:43.916561 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory Oct 8 19:29:43.925164 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 19:29:43.944373 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 19:29:44.262966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 19:29:44.272427 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 19:29:44.282574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 19:29:44.300232 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 19:29:44.304345 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:29:44.337289 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 19:29:44.346431 systemd-networkd[1197]: eth0: Gained IPv6LL Oct 8 19:29:44.355985 ignition[1363]: INFO : Ignition 2.18.0 Oct 8 19:29:44.355985 ignition[1363]: INFO : Stage: mount Oct 8 19:29:44.359168 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:29:44.359168 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:29:44.363569 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:29:44.365924 ignition[1363]: INFO : PUT result: OK Oct 8 19:29:44.370891 ignition[1363]: INFO : mount: mount passed Oct 8 19:29:44.370891 ignition[1363]: INFO : Ignition finished successfully Oct 8 19:29:44.375770 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 19:29:44.385395 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 19:29:44.444572 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 8 19:29:44.473234 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1377) Oct 8 19:29:44.477083 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:29:44.477133 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:29:44.477159 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 8 19:29:44.482224 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 8 19:29:44.485398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:29:44.520366 ignition[1394]: INFO : Ignition 2.18.0 Oct 8 19:29:44.520366 ignition[1394]: INFO : Stage: files Oct 8 19:29:44.523565 ignition[1394]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:29:44.523565 ignition[1394]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:29:44.523565 ignition[1394]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:29:44.530430 ignition[1394]: INFO : PUT result: OK Oct 8 19:29:44.534989 ignition[1394]: DEBUG : files: compiled without relabeling support, skipping Oct 8 19:29:44.537638 ignition[1394]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 19:29:44.537638 ignition[1394]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 19:29:44.567926 ignition[1394]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 19:29:44.570764 ignition[1394]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 19:29:44.573604 unknown[1394]: wrote ssh authorized keys file for user: core Oct 8 19:29:44.578302 ignition[1394]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 19:29:44.578302 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 
19:29:44.578302 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 8 19:29:44.675665 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 19:29:45.015771 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:29:45.019397 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 19:29:45.022506 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 19:29:45.022506 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:29:45.028989 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:29:45.032116 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:29:45.035345 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:29:45.038474 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:29:45.041734 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:29:45.045334 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:29:45.048865 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:29:45.052058 
ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:29:45.056682 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:29:45.061715 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:29:45.061715 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Oct 8 19:29:45.496893 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 19:29:45.869968 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:29:45.869968 ignition[1394]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: createResultFile: createFiles: op(e): 
[started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:29:45.876701 ignition[1394]: INFO : files: files passed Oct 8 19:29:45.876701 ignition[1394]: INFO : Ignition finished successfully Oct 8 19:29:45.903273 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 19:29:45.921618 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 19:29:45.928763 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 19:29:45.936918 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 19:29:45.939308 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 19:29:45.961320 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:29:45.961320 initrd-setup-root-after-ignition[1423]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:29:45.969298 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:29:45.977292 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:29:45.980897 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 19:29:46.002606 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 19:29:46.062449 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 19:29:46.064243 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 19:29:46.068042 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 19:29:46.070447 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Oct 8 19:29:46.074535 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 19:29:46.097844 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 19:29:46.120963 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:29:46.135473 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 19:29:46.160345 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:29:46.163173 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:29:46.169707 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 19:29:46.171855 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 19:29:46.172095 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:29:46.190547 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 19:29:46.192855 systemd[1]: Stopped target basic.target - Basic System. Oct 8 19:29:46.195102 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 19:29:46.202425 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:29:46.204887 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 19:29:46.211027 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 19:29:46.213623 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:29:46.219779 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 19:29:46.222488 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 19:29:46.225554 systemd[1]: Stopped target swap.target - Swaps. Oct 8 19:29:46.227543 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 8 19:29:46.227789 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:29:46.248343 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:29:46.252746 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:29:46.255046 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 19:29:46.259161 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:29:46.261758 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 19:29:46.262251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 19:29:46.268123 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 19:29:46.268373 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:29:46.271056 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 19:29:46.271353 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 19:29:46.291627 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 19:29:46.293370 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 19:29:46.293632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:29:46.302453 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 19:29:46.309299 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 19:29:46.309599 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:29:46.312092 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 19:29:46.312350 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:29:46.331066 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 19:29:46.331717 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Oct 8 19:29:46.358008 ignition[1447]: INFO : Ignition 2.18.0 Oct 8 19:29:46.358008 ignition[1447]: INFO : Stage: umount Oct 8 19:29:46.358008 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:29:46.358008 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:29:46.358008 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:29:46.369485 ignition[1447]: INFO : PUT result: OK Oct 8 19:29:46.374971 ignition[1447]: INFO : umount: umount passed Oct 8 19:29:46.374971 ignition[1447]: INFO : Ignition finished successfully Oct 8 19:29:46.381035 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 19:29:46.383407 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 19:29:46.388178 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 19:29:46.391972 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 19:29:46.392065 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 19:29:46.396348 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 19:29:46.396436 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 19:29:46.398341 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 19:29:46.398415 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 19:29:46.400339 systemd[1]: Stopped target network.target - Network. Oct 8 19:29:46.402339 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 19:29:46.402423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:29:46.405334 systemd[1]: Stopped target paths.target - Path Units. Oct 8 19:29:46.406975 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 19:29:46.409984 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 8 19:29:46.412283 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:29:46.415622 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:29:46.418943 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:29:46.419077 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:29:46.444090 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:29:46.444183 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:29:46.449584 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:29:46.449778 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:29:46.454961 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:29:46.455049 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:29:46.457398 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:29:46.461258 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:29:46.464666 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Oct 8 19:29:46.467743 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:29:46.468047 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:29:46.478849 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:29:46.482877 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:29:46.488854 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:29:46.488948 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:29:46.503583 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:29:46.506229 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:29:46.506362 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:29:46.508814 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:29:46.508909 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:29:46.511001 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:29:46.511090 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:29:46.516491 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:29:46.517052 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:29:46.520890 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:29:46.558768 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:29:46.558960 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:29:46.570988 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:29:46.574894 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:29:46.579117 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:29:46.579236 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:29:46.598405 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:29:46.598880 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:29:46.605610 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:29:46.605700 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:29:46.607781 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:29:46.607847 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:29:46.610177 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:29:46.610281 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:29:46.612476 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:29:46.612553 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:29:46.618134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:29:46.625283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:29:46.649006 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:29:46.655287 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:29:46.655419 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:29:46.660275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:29:46.660388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:29:46.672269 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:29:46.676418 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:29:46.679349 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:29:46.690559 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:29:46.711669 systemd[1]: Switching root.
Oct 8 19:29:46.752653 systemd-journald[250]: Journal stopped
Oct 8 19:29:49.322673 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:29:49.322812 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:29:49.322870 kernel: SELinux: policy capability open_perms=1
Oct 8 19:29:49.322901 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:29:49.322931 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:29:49.322961 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:29:49.322990 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:29:49.323020 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:29:49.323058 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:29:49.323088 kernel: audit: type=1403 audit(1728415787.714:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:29:49.323128 systemd[1]: Successfully loaded SELinux policy in 56.723ms.
Oct 8 19:29:49.323169 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.782ms.
Oct 8 19:29:49.325248 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:29:49.325299 systemd[1]: Detected virtualization amazon.
Oct 8 19:29:49.325336 systemd[1]: Detected architecture arm64.
Oct 8 19:29:49.325374 systemd[1]: Detected first boot.
Oct 8 19:29:49.325410 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:29:49.325441 zram_generator::config[1489]: No configuration found.
Oct 8 19:29:49.325477 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:29:49.325516 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:29:49.325549 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:29:49.325584 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:29:49.325617 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:29:49.325648 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:29:49.325685 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:29:49.325727 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:29:49.325761 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:29:49.325790 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:29:49.325831 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:29:49.325865 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:29:49.325896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:29:49.325927 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:29:49.325957 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:29:49.325990 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:29:49.326023 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:29:49.326055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:29:49.326108 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:29:49.326147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:29:49.326180 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:29:49.328288 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:29:49.328332 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:29:49.328365 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:29:49.328399 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:29:49.328431 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:29:49.328469 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:29:49.328502 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:29:49.328531 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:29:49.328562 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:29:49.328592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:29:49.328621 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:29:49.328650 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:29:49.328681 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:29:49.328712 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:29:49.328741 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:29:49.328776 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:29:49.328809 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:29:49.328838 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:29:49.328867 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:29:49.328897 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:29:49.328927 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:29:49.328956 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:29:49.328988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:29:49.329023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:29:49.329053 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:29:49.329084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:29:49.329115 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:29:49.329147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:29:49.329178 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:29:49.329261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:29:49.329298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:29:49.329334 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:29:49.329364 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:29:49.329395 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:29:49.329428 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:29:49.329459 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:29:49.329488 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:29:49.329518 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:29:49.329548 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:29:49.329578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:29:49.329629 kernel: loop: module loaded
Oct 8 19:29:49.329660 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:29:49.329691 systemd[1]: Stopped verity-setup.service.
Oct 8 19:29:49.329723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:29:49.329753 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:29:49.329784 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:29:49.329818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:29:49.329851 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:29:49.329886 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:29:49.329923 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:29:49.329954 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:29:49.329987 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:29:49.330020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:29:49.330056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:29:49.330118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:29:49.330161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:29:49.332243 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:29:49.332362 systemd-journald[1566]: Collecting audit messages is disabled.
Oct 8 19:29:49.332416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:29:49.332449 systemd-journald[1566]: Journal started
Oct 8 19:29:49.332507 systemd-journald[1566]: Runtime Journal (/run/log/journal/ec25669cecc3e800a03a0a39efbe2cb8) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:29:49.344422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:29:48.772217 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:29:48.831132 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Oct 8 19:29:48.831909 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:29:49.353311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:29:49.368319 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:29:49.365406 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:29:49.368866 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:29:49.384289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:29:49.387516 kernel: fuse: init (API version 7.39)
Oct 8 19:29:49.395050 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:29:49.397339 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:29:49.402319 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:29:49.432085 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:29:49.444428 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:29:49.447451 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:29:49.447511 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:29:49.461668 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:29:49.474218 kernel: ACPI: bus type drm_connector registered
Oct 8 19:29:49.474894 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:29:49.484661 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:29:49.487224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:29:49.491561 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:29:49.501639 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:29:49.503915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:29:49.509603 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:29:49.516685 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:29:49.527521 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:29:49.535245 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:29:49.538940 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:29:49.541255 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:29:49.543891 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:29:49.548298 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:29:49.565495 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:29:49.593881 systemd-journald[1566]: Time spent on flushing to /var/log/journal/ec25669cecc3e800a03a0a39efbe2cb8 is 175.393ms for 904 entries.
Oct 8 19:29:49.593881 systemd-journald[1566]: System Journal (/var/log/journal/ec25669cecc3e800a03a0a39efbe2cb8) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:29:49.792749 systemd-journald[1566]: Received client request to flush runtime journal.
Oct 8 19:29:49.793918 kernel: loop0: detected capacity change from 0 to 113672
Oct 8 19:29:49.793990 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 19:29:49.795576 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:29:49.795731 kernel: loop1: detected capacity change from 0 to 194096
Oct 8 19:29:49.633513 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:29:49.636080 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:29:49.647736 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:29:49.704032 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:29:49.730170 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:29:49.742805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:29:49.754719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:29:49.766706 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:29:49.799244 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:29:49.807302 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:29:49.810383 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:29:49.827697 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:29:49.843015 systemd-tmpfiles[1630]: ACLs are not supported, ignoring.
Oct 8 19:29:49.843056 systemd-tmpfiles[1630]: ACLs are not supported, ignoring.
Oct 8 19:29:49.861618 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:29:49.868246 kernel: loop2: detected capacity change from 0 to 51896
Oct 8 19:29:49.960497 kernel: loop3: detected capacity change from 0 to 59688
Oct 8 19:29:50.090266 kernel: loop4: detected capacity change from 0 to 113672
Oct 8 19:29:50.103303 kernel: loop5: detected capacity change from 0 to 194096
Oct 8 19:29:50.118247 kernel: loop6: detected capacity change from 0 to 51896
Oct 8 19:29:50.133302 kernel: loop7: detected capacity change from 0 to 59688
Oct 8 19:29:50.151081 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Oct 8 19:29:50.152658 (sd-merge)[1643]: Merged extensions into '/usr'.
Oct 8 19:29:50.160471 systemd[1]: Reloading requested from client PID 1616 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:29:50.160669 systemd[1]: Reloading...
Oct 8 19:29:50.329259 zram_generator::config[1664]: No configuration found.
Oct 8 19:29:50.653687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:29:50.767121 systemd[1]: Reloading finished in 603 ms.
Oct 8 19:29:50.821176 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:29:50.833780 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:29:50.844561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:29:50.869954 systemd[1]: Reloading requested from client PID 1718 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:29:50.869988 systemd[1]: Reloading...
Oct 8 19:29:50.924723 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:29:50.925424 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:29:50.927514 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:29:50.928567 systemd-tmpfiles[1719]: ACLs are not supported, ignoring.
Oct 8 19:29:50.928703 systemd-tmpfiles[1719]: ACLs are not supported, ignoring.
Oct 8 19:29:50.937505 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:29:50.937534 systemd-tmpfiles[1719]: Skipping /boot
Oct 8 19:29:50.968124 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:29:50.968156 systemd-tmpfiles[1719]: Skipping /boot
Oct 8 19:29:51.050249 zram_generator::config[1742]: No configuration found.
Oct 8 19:29:51.199891 ldconfig[1611]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:29:51.284386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:29:51.395322 systemd[1]: Reloading finished in 524 ms.
Oct 8 19:29:51.424762 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:29:51.427466 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:29:51.433123 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:29:51.458552 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:29:51.470727 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:29:51.478723 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:29:51.492701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:29:51.508489 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:29:51.515658 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:29:51.538531 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:29:51.543560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:29:51.546764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:29:51.552005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:29:51.558506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:29:51.560625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:29:51.566733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:29:51.567070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:29:51.572026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:29:51.612083 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:29:51.614318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:29:51.614768 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:29:51.640405 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:29:51.647293 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:29:51.662751 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:29:51.681609 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:29:51.692074 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:29:51.692446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:29:51.697580 augenrules[1829]: No rules
Oct 8 19:29:51.698215 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:29:51.700302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:29:51.703118 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:29:51.705857 systemd-udevd[1811]: Using default interface naming scheme 'v255'.
Oct 8 19:29:51.714370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:29:51.715304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:29:51.718572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:29:51.719684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:29:51.720448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:29:51.724033 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:29:51.757613 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:29:51.768662 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:29:51.772150 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:29:51.774669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:29:51.794671 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:29:51.811623 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:29:51.969442 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 19:29:51.999246 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1857)
Oct 8 19:29:52.014363 (udev-worker)[1848]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:29:52.022644 systemd-networkd[1845]: lo: Link UP
Oct 8 19:29:52.023131 systemd-networkd[1845]: lo: Gained carrier
Oct 8 19:29:52.025833 systemd-networkd[1845]: Enumeration completed
Oct 8 19:29:52.026235 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:29:52.031142 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:29:52.031379 systemd-networkd[1845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:29:52.034878 systemd-networkd[1845]: eth0: Link UP
Oct 8 19:29:52.035438 systemd-networkd[1845]: eth0: Gained carrier
Oct 8 19:29:52.035480 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:29:52.054878 systemd-resolved[1805]: Positive Trust Anchors:
Oct 8 19:29:52.054918 systemd-resolved[1805]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:29:52.054981 systemd-resolved[1805]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:29:52.062439 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:29:52.071970 systemd-resolved[1805]: Defaulting to hostname 'linux'.
Oct 8 19:29:52.072144 systemd-networkd[1845]: eth0: DHCPv4 address 172.31.26.181/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:29:52.078287 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:29:52.080510 systemd[1]: Reached target network.target - Network.
Oct 8 19:29:52.082260 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:29:52.100875 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:29:52.207414 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1860)
Oct 8 19:29:52.284703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:29:52.441484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:29:52.447273 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:29:52.451254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:29:52.461549 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 19:29:52.470668 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 19:29:52.488225 lvm[1971]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:29:52.511407 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 19:29:52.523151 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 19:29:52.525976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:29:52.528111 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:29:52.530304 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 19:29:52.532611 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 19:29:52.535217 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 19:29:52.537398 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 19:29:52.539689 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 19:29:52.541927 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 19:29:52.541976 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:29:52.543641 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:29:52.546905 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 19:29:52.551827 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:29:52.559694 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Oct 8 19:29:52.564830 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 19:29:52.567940 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:29:52.572576 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:29:52.576714 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:29:52.580305 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:29:52.580376 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:29:52.594423 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:29:52.602053 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 8 19:29:52.607709 lvm[1978]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:29:52.607506 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:29:52.620385 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:29:52.628525 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:29:52.631395 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:29:52.636695 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 19:29:52.647878 systemd[1]: Started ntpd.service - Network Time Service. Oct 8 19:29:52.661347 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:29:52.668643 jq[1982]: false Oct 8 19:29:52.669410 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 8 19:29:52.675492 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:29:52.689579 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 8 19:29:52.701554 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:29:52.705266 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 19:29:52.706113 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:29:52.716062 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:29:52.722100 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:29:52.729319 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:29:52.736071 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:29:52.738313 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:29:52.795234 jq[1994]: true Oct 8 19:29:52.861144 update_engine[1991]: I1008 19:29:52.857721 1991 main.cc:92] Flatcar Update Engine starting Oct 8 19:29:52.860762 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:29:52.863317 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 8 19:29:52.881536 extend-filesystems[1983]: Found loop4 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found loop5 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found loop6 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found loop7 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1p1 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1p2 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1p3 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found usr Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1p4 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1p6 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1p7 Oct 8 19:29:52.881536 extend-filesystems[1983]: Found nvme0n1p9 Oct 8 19:29:52.881536 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 Oct 8 19:29:52.874774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:29:52.913468 dbus-daemon[1981]: [system] SELinux support is enabled Oct 8 19:29:52.879063 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:29:52.923326 dbus-daemon[1981]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1845 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 8 19:29:52.926837 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:29:52.934937 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:29:52.935036 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 8 19:29:52.938374 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:29:52.938425 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:29:52.945549 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:29:52.947697 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 8 19:29:52.950533 update_engine[1991]: I1008 19:29:52.950459 1991 update_check_scheduler.cc:74] Next update check in 7m29s Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:46:09 UTC 2024 (1): Starting Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: ---------------------------------------------------- Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: corporation. Support and training for ntp-4 are Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: available at https://www.nwtime.org/support Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: ---------------------------------------------------- Oct 8 19:29:52.968279 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: proto: precision = 0.108 usec (-23) Oct 8 19:29:52.960726 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 8 19:29:52.964007 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:46:09 UTC 2024 (1): Starting Oct 8 19:29:52.969576 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: basedate set to 2024-09-26 Oct 8 19:29:52.969576 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: gps base set to 2024-09-29 (week 2334) Oct 8 19:29:52.963725 (ntainerd)[2022]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:29:52.964052 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 8 19:29:52.964073 ntpd[1985]: ---------------------------------------------------- Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Listen normally on 3 eth0 172.31.26.181:123 Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Listen normally on 4 lo [::1]:123 Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: bind(21) AF_INET6 fe80::40a:14ff:fe8c:2f1b%2#123 flags 0x11 failed: Cannot assign requested address Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: unable to create socket on eth0 (5) for fe80::40a:14ff:fe8c:2f1b%2#123 Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: failed to init interface for address fe80::40a:14ff:fe8c:2f1b%2 Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:29:52.995793 ntpd[1985]: 8 Oct 19:29:52 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:29:52.976529 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Oct 8 19:29:53.000603 tar[1997]: linux-arm64/helm Oct 8 19:29:52.964092 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Oct 8 19:29:52.964111 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 8 19:29:52.964130 ntpd[1985]: corporation. Support and training for ntp-4 are Oct 8 19:29:52.964148 ntpd[1985]: available at https://www.nwtime.org/support Oct 8 19:29:52.964166 ntpd[1985]: ---------------------------------------------------- Oct 8 19:29:52.966278 ntpd[1985]: proto: precision = 0.108 usec (-23) Oct 8 19:29:52.968833 ntpd[1985]: basedate set to 2024-09-26 Oct 8 19:29:52.968863 ntpd[1985]: gps base set to 2024-09-29 (week 2334) Oct 8 19:29:52.973460 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Oct 8 19:29:52.973536 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 8 19:29:52.975436 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Oct 8 19:29:52.975506 ntpd[1985]: Listen normally on 3 eth0 172.31.26.181:123 Oct 8 19:29:52.975575 ntpd[1985]: Listen normally on 4 lo [::1]:123 Oct 8 19:29:52.975654 ntpd[1985]: bind(21) AF_INET6 fe80::40a:14ff:fe8c:2f1b%2#123 flags 0x11 failed: Cannot assign requested address Oct 8 19:29:52.975697 ntpd[1985]: unable to create socket on eth0 (5) for fe80::40a:14ff:fe8c:2f1b%2#123 Oct 8 19:29:52.975725 ntpd[1985]: failed to init interface for address fe80::40a:14ff:fe8c:2f1b%2 Oct 8 19:29:52.975779 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Oct 8 19:29:52.985719 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:29:52.985766 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 19:29:53.007891 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 Oct 8 19:29:53.013763 jq[2013]: true Oct 8 19:29:53.022229 extend-filesystems[2034]: resize2fs 1.47.0 (5-Feb-2023) Oct 8 19:29:53.034229 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 8 19:29:53.097600 systemd[1]: Finished 
setup-oem.service - Setup OEM. Oct 8 19:29:53.109413 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.124 INFO Fetch failed with 404: resource not found Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.124 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.125 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.128 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.128 
INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.129 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.129 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.130 INFO Fetch successful Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.130 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Oct 8 19:29:53.150910 coreos-metadata[1980]: Oct 08 19:29:53.133 INFO Fetch successful Oct 8 19:29:53.162277 extend-filesystems[2034]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 8 19:29:53.162277 extend-filesystems[2034]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:29:53.162277 extend-filesystems[2034]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 8 19:29:53.169669 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:29:53.200536 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 Oct 8 19:29:53.175342 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:29:53.247139 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 19:29:53.253021 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:29:53.257656 bash[2061]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:29:53.265443 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:29:53.280519 systemd[1]: Starting sshkeys.service... 
Oct 8 19:29:53.286498 systemd-logind[1990]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:29:53.286555 systemd-logind[1990]: Watching system buttons on /dev/input/event1 (Sleep Button) Oct 8 19:29:53.288633 systemd-logind[1990]: New seat seat0. Oct 8 19:29:53.292140 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:29:53.350007 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 19:29:53.373101 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 19:29:53.480844 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 8 19:29:53.481094 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 8 19:29:53.488244 dbus-daemon[1981]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2029 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 8 19:29:53.497355 systemd-networkd[1845]: eth0: Gained IPv6LL Oct 8 19:29:53.505067 systemd[1]: Starting polkit.service - Authorization Manager... Oct 8 19:29:53.515492 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:29:53.519713 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:29:53.558317 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1849) Oct 8 19:29:53.562454 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Oct 8 19:29:53.581965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:29:53.597832 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Oct 8 19:29:53.687074 polkitd[2108]: Started polkitd version 121 Oct 8 19:29:53.736294 polkitd[2108]: Loading rules from directory /etc/polkit-1/rules.d Oct 8 19:29:53.736408 polkitd[2108]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 8 19:29:53.755308 polkitd[2108]: Finished loading, compiling and executing 2 rules Oct 8 19:29:53.755971 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:29:53.774390 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 8 19:29:53.776310 systemd[1]: Started polkit.service - Authorization Manager. Oct 8 19:29:53.781710 polkitd[2108]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 8 19:29:53.879225 amazon-ssm-agent[2115]: Initializing new seelog logger Oct 8 19:29:53.879225 amazon-ssm-agent[2115]: New Seelog Logger Creation Complete Oct 8 19:29:53.879225 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:29:53.879225 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:29:53.892003 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 processing appconfig overrides Oct 8 19:29:53.896529 coreos-metadata[2090]: Oct 08 19:29:53.896 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 8 19:29:53.899061 coreos-metadata[2090]: Oct 08 19:29:53.898 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Oct 8 19:29:53.901577 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:29:53.901577 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:29:53.901755 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 processing appconfig overrides Oct 8 19:29:53.902113 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 8 19:29:53.902113 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:29:53.911281 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 processing appconfig overrides Oct 8 19:29:53.911281 amazon-ssm-agent[2115]: 2024-10-08 19:29:53 INFO Proxy environment variables: Oct 8 19:29:53.911961 coreos-metadata[2090]: Oct 08 19:29:53.911 INFO Fetch successful Oct 8 19:29:53.911961 coreos-metadata[2090]: Oct 08 19:29:53.911 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 8 19:29:53.916406 coreos-metadata[2090]: Oct 08 19:29:53.916 INFO Fetch successful Oct 8 19:29:53.920325 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:29:53.920325 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 8 19:29:53.920325 amazon-ssm-agent[2115]: 2024/10/08 19:29:53 processing appconfig overrides Oct 8 19:29:53.921935 unknown[2090]: wrote ssh authorized keys file for user: core Oct 8 19:29:53.975737 systemd-hostnamed[2029]: Hostname set to (transient) Oct 8 19:29:53.975815 systemd-resolved[1805]: System hostname changed to 'ip-172-31-26-181'. Oct 8 19:29:53.998450 update-ssh-keys[2183]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:29:53.999288 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 19:29:54.011772 systemd[1]: Finished sshkeys.service. 
Oct 8 19:29:54.013901 amazon-ssm-agent[2115]: 2024-10-08 19:29:53 INFO https_proxy: Oct 8 19:29:54.091608 locksmithd[2026]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:29:54.117136 amazon-ssm-agent[2115]: 2024-10-08 19:29:53 INFO http_proxy: Oct 8 19:29:54.158593 containerd[2022]: time="2024-10-08T19:29:54.158458953Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 19:29:54.220383 amazon-ssm-agent[2115]: 2024-10-08 19:29:53 INFO no_proxy: Oct 8 19:29:54.306291 containerd[2022]: time="2024-10-08T19:29:54.304235470Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:29:54.306291 containerd[2022]: time="2024-10-08T19:29:54.304385986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.310610506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.310703194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311037466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311069530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311252890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311370382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311398462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311549422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311940214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.311974834Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 19:29:54.312220 containerd[2022]: time="2024-10-08T19:29:54.312002518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:29:54.320311 amazon-ssm-agent[2115]: 2024-10-08 19:29:53 INFO Checking if agent identity type OnPrem can be assumed Oct 8 19:29:54.322783 containerd[2022]: time="2024-10-08T19:29:54.312184642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:29:54.322783 containerd[2022]: time="2024-10-08T19:29:54.322316914Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:29:54.322783 containerd[2022]: time="2024-10-08T19:29:54.322476058Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 19:29:54.322783 containerd[2022]: time="2024-10-08T19:29:54.322503082Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:29:54.331307 containerd[2022]: time="2024-10-08T19:29:54.331127362Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:29:54.331307 containerd[2022]: time="2024-10-08T19:29:54.331210498Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:29:54.331307 containerd[2022]: time="2024-10-08T19:29:54.331245838Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:29:54.331592 containerd[2022]: time="2024-10-08T19:29:54.331561570Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.331777030Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.331810498Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.331839190Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332070238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332103754Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332134654Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332170246Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332225062Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332264854Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332295250Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332325922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332356918Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332386894Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1
Oct 8 19:29:54.334219 containerd[2022]: time="2024-10-08T19:29:54.332414878Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:29:54.334821 containerd[2022]: time="2024-10-08T19:29:54.332441830Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:29:54.334821 containerd[2022]: time="2024-10-08T19:29:54.332625670Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:29:54.337643 containerd[2022]: time="2024-10-08T19:29:54.337596778Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340247062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340296082Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340345138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340492366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340525906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340680814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340710982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340741210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340771606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340800418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340828234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.340858846Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.341155258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.341209642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343120 containerd[2022]: time="2024-10-08T19:29:54.341243758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.343893 containerd[2022]: time="2024-10-08T19:29:54.341277718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.350828 containerd[2022]: time="2024-10-08T19:29:54.350737030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.350974 containerd[2022]: time="2024-10-08T19:29:54.350854642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.350974 containerd[2022]: time="2024-10-08T19:29:54.350902642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.350974 containerd[2022]: time="2024-10-08T19:29:54.350943106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:29:54.354133 containerd[2022]: time="2024-10-08T19:29:54.351532546Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:29:54.354133 containerd[2022]: time="2024-10-08T19:29:54.351660898Z" level=info msg="Connect containerd service"
Oct 8 19:29:54.354133 containerd[2022]: time="2024-10-08T19:29:54.351737698Z" level=info msg="using legacy CRI server"
Oct 8 19:29:54.354133 containerd[2022]: time="2024-10-08T19:29:54.351765094Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:29:54.354133 containerd[2022]: time="2024-10-08T19:29:54.351969478Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:29:54.366225 containerd[2022]: time="2024-10-08T19:29:54.363982570Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:29:54.366225 containerd[2022]: time="2024-10-08T19:29:54.364103386Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:29:54.366225 containerd[2022]: time="2024-10-08T19:29:54.364155694Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:29:54.366449 containerd[2022]: time="2024-10-08T19:29:54.366317278Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:29:54.366497 containerd[2022]: time="2024-10-08T19:29:54.366441550Z" level=info msg="Start subscribing containerd event"
Oct 8 19:29:54.366545 containerd[2022]: time="2024-10-08T19:29:54.366522070Z" level=info msg="Start recovering state"
Oct 8 19:29:54.368147 containerd[2022]: time="2024-10-08T19:29:54.366663562Z" level=info msg="Start event monitor"
Oct 8 19:29:54.368147 containerd[2022]: time="2024-10-08T19:29:54.366700654Z" level=info msg="Start snapshots syncer"
Oct 8 19:29:54.368147 containerd[2022]: time="2024-10-08T19:29:54.366734842Z" level=info msg="Start cni network conf syncer for default"
Oct 8 19:29:54.368147 containerd[2022]: time="2024-10-08T19:29:54.366754798Z" level=info msg="Start streaming server"
Oct 8 19:29:54.371123 containerd[2022]: time="2024-10-08T19:29:54.371042698Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:29:54.371586 containerd[2022]: time="2024-10-08T19:29:54.371542474Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 19:29:54.371696 containerd[2022]: time="2024-10-08T19:29:54.371659966Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 19:29:54.371891 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 19:29:54.377169 containerd[2022]: time="2024-10-08T19:29:54.377109598Z" level=info msg="containerd successfully booted in 0.227824s"
Oct 8 19:29:54.418474 amazon-ssm-agent[2115]: 2024-10-08 19:29:53 INFO Checking if agent identity type EC2 can be assumed
Oct 8 19:29:54.518009 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO Agent will take identity from EC2
Oct 8 19:29:54.621030 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:29:54.718873 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:29:54.818522 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:29:54.932158 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Oct 8 19:29:55.019024 tar[1997]: linux-arm64/LICENSE
Oct 8 19:29:55.019551 tar[1997]: linux-arm64/README.md
Oct 8 19:29:55.032408 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Oct 8 19:29:55.060173 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 19:29:55.132720 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [amazon-ssm-agent] Starting Core Agent
Oct 8 19:29:55.235206 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Oct 8 19:29:55.333469 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [Registrar] Starting registrar module
Oct 8 19:29:55.434304 amazon-ssm-agent[2115]: 2024-10-08 19:29:54 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Oct 8 19:29:55.475457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:29:55.492704 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:29:55.788310 sshd_keygen[2014]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:29:55.861024 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:29:55.877892 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:29:55.913825 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:29:55.915919 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:29:55.931741 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:29:55.965597 ntpd[1985]: Listen normally on 6 eth0 [fe80::40a:14ff:fe8c:2f1b%2]:123
Oct 8 19:29:55.966026 ntpd[1985]: 8 Oct 19:29:55 ntpd[1985]: Listen normally on 6 eth0 [fe80::40a:14ff:fe8c:2f1b%2]:123
Oct 8 19:29:55.982183 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:29:55.993871 systemd[1]: Started sshd@0-172.31.26.181:22-139.178.68.195:49236.service - OpenSSH per-connection server daemon (139.178.68.195:49236).
Oct 8 19:29:55.998371 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:29:56.013499 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:29:56.033371 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 19:29:56.037745 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:29:56.040262 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 19:29:56.042554 systemd[1]: Startup finished in 1.157s (kernel) + 8.907s (initrd) + 8.383s (userspace) = 18.448s.
Oct 8 19:29:56.269959 sshd[2233]: Accepted publickey for core from 139.178.68.195 port 49236 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:29:56.276669 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:56.297498 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:29:56.308862 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:29:56.320296 systemd-logind[1990]: New session 1 of user core.
Oct 8 19:29:56.358325 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:29:56.370967 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:29:56.388832 (systemd)[2245]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:56.418241 amazon-ssm-agent[2115]: 2024-10-08 19:29:56 INFO [EC2Identity] EC2 registration was successful.
Oct 8 19:29:56.455624 amazon-ssm-agent[2115]: 2024-10-08 19:29:56 INFO [CredentialRefresher] credentialRefresher has started
Oct 8 19:29:56.455624 amazon-ssm-agent[2115]: 2024-10-08 19:29:56 INFO [CredentialRefresher] Starting credentials refresher loop
Oct 8 19:29:56.455624 amazon-ssm-agent[2115]: 2024-10-08 19:29:56 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Oct 8 19:29:56.474492 kubelet[2213]: E1008 19:29:56.474431 2213 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:29:56.480438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:29:56.480758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:29:56.481303 systemd[1]: kubelet.service: Consumed 1.281s CPU time.
Oct 8 19:29:56.519958 amazon-ssm-agent[2115]: 2024-10-08 19:29:56 INFO [CredentialRefresher] Next credential rotation will be in 30.983325754733333 minutes
Oct 8 19:29:56.618082 systemd[2245]: Queued start job for default target default.target.
Oct 8 19:29:56.627868 systemd[2245]: Created slice app.slice - User Application Slice.
Oct 8 19:29:56.627930 systemd[2245]: Reached target paths.target - Paths.
Oct 8 19:29:56.627963 systemd[2245]: Reached target timers.target - Timers.
Oct 8 19:29:56.630420 systemd[2245]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 19:29:56.652021 systemd[2245]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 19:29:56.652446 systemd[2245]: Reached target sockets.target - Sockets.
Oct 8 19:29:56.652643 systemd[2245]: Reached target basic.target - Basic System.
Oct 8 19:29:56.652863 systemd[2245]: Reached target default.target - Main User Target.
Oct 8 19:29:56.653105 systemd[2245]: Startup finished in 249ms.
Oct 8 19:29:56.653446 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 19:29:56.664452 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 19:29:56.818700 systemd[1]: Started sshd@1-172.31.26.181:22-139.178.68.195:49244.service - OpenSSH per-connection server daemon (139.178.68.195:49244).
Oct 8 19:29:57.002619 sshd[2258]: Accepted publickey for core from 139.178.68.195 port 49244 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:29:57.005065 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:57.013362 systemd-logind[1990]: New session 2 of user core.
Oct 8 19:29:57.024450 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 19:29:57.152179 sshd[2258]: pam_unix(sshd:session): session closed for user core
Oct 8 19:29:57.157088 systemd[1]: sshd@1-172.31.26.181:22-139.178.68.195:49244.service: Deactivated successfully.
Oct 8 19:29:57.160294 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 19:29:57.165254 systemd-logind[1990]: Session 2 logged out. Waiting for processes to exit.
Oct 8 19:29:57.166940 systemd-logind[1990]: Removed session 2.
Oct 8 19:29:57.195524 systemd[1]: Started sshd@2-172.31.26.181:22-139.178.68.195:49250.service - OpenSSH per-connection server daemon (139.178.68.195:49250).
Oct 8 19:29:57.360176 sshd[2265]: Accepted publickey for core from 139.178.68.195 port 49250 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:29:57.362591 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:57.371343 systemd-logind[1990]: New session 3 of user core.
Oct 8 19:29:57.378465 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 19:29:57.483657 amazon-ssm-agent[2115]: 2024-10-08 19:29:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Oct 8 19:29:57.498530 sshd[2265]: pam_unix(sshd:session): session closed for user core
Oct 8 19:29:57.506454 systemd[1]: sshd@2-172.31.26.181:22-139.178.68.195:49250.service: Deactivated successfully.
Oct 8 19:29:57.510698 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 19:29:57.515643 systemd-logind[1990]: Session 3 logged out. Waiting for processes to exit.
Oct 8 19:29:57.517689 systemd-logind[1990]: Removed session 3.
Oct 8 19:29:57.537957 systemd[1]: Started sshd@3-172.31.26.181:22-139.178.68.195:49262.service - OpenSSH per-connection server daemon (139.178.68.195:49262).
Oct 8 19:29:57.584358 amazon-ssm-agent[2115]: 2024-10-08 19:29:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2270) started
Oct 8 19:29:57.685236 amazon-ssm-agent[2115]: 2024-10-08 19:29:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Oct 8 19:29:57.707302 sshd[2274]: Accepted publickey for core from 139.178.68.195 port 49262 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:29:57.710671 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:57.718537 systemd-logind[1990]: New session 4 of user core.
Oct 8 19:29:57.729470 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 19:29:57.860568 sshd[2274]: pam_unix(sshd:session): session closed for user core
Oct 8 19:29:57.866661 systemd[1]: sshd@3-172.31.26.181:22-139.178.68.195:49262.service: Deactivated successfully.
Oct 8 19:29:57.871272 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 19:29:57.872400 systemd-logind[1990]: Session 4 logged out. Waiting for processes to exit.
Oct 8 19:29:57.874033 systemd-logind[1990]: Removed session 4.
Oct 8 19:29:57.905668 systemd[1]: Started sshd@4-172.31.26.181:22-139.178.68.195:49264.service - OpenSSH per-connection server daemon (139.178.68.195:49264).
Oct 8 19:29:58.073393 sshd[2290]: Accepted publickey for core from 139.178.68.195 port 49264 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:29:58.076363 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:58.083311 systemd-logind[1990]: New session 5 of user core.
Oct 8 19:29:58.089466 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 19:29:58.225585 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 19:29:58.226154 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:29:58.239618 sudo[2293]: pam_unix(sudo:session): session closed for user root
Oct 8 19:29:58.263723 sshd[2290]: pam_unix(sshd:session): session closed for user core
Oct 8 19:29:58.270829 systemd[1]: sshd@4-172.31.26.181:22-139.178.68.195:49264.service: Deactivated successfully.
Oct 8 19:29:58.274631 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 19:29:58.276151 systemd-logind[1990]: Session 5 logged out. Waiting for processes to exit.
Oct 8 19:29:58.278552 systemd-logind[1990]: Removed session 5.
Oct 8 19:29:58.303673 systemd[1]: Started sshd@5-172.31.26.181:22-139.178.68.195:49272.service - OpenSSH per-connection server daemon (139.178.68.195:49272).
Oct 8 19:29:58.469874 sshd[2298]: Accepted publickey for core from 139.178.68.195 port 49272 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:29:58.472768 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:58.480341 systemd-logind[1990]: New session 6 of user core.
Oct 8 19:29:58.489435 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:29:58.593915 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:29:58.594505 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:29:58.600702 sudo[2302]: pam_unix(sudo:session): session closed for user root
Oct 8 19:29:58.610328 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:29:58.610838 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:29:58.636681 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:29:58.642650 auditctl[2305]: No rules
Oct 8 19:29:58.641357 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:29:58.641695 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:29:58.654060 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:29:58.695829 augenrules[2323]: No rules
Oct 8 19:29:58.698384 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:29:58.700846 sudo[2301]: pam_unix(sudo:session): session closed for user root
Oct 8 19:29:58.724434 sshd[2298]: pam_unix(sshd:session): session closed for user core
Oct 8 19:29:58.731090 systemd[1]: sshd@5-172.31.26.181:22-139.178.68.195:49272.service: Deactivated successfully.
Oct 8 19:29:58.735412 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:29:58.737040 systemd-logind[1990]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:29:58.739596 systemd-logind[1990]: Removed session 6.
Oct 8 19:29:58.759716 systemd[1]: Started sshd@6-172.31.26.181:22-139.178.68.195:49282.service - OpenSSH per-connection server daemon (139.178.68.195:49282).
Oct 8 19:29:58.931757 sshd[2331]: Accepted publickey for core from 139.178.68.195 port 49282 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:29:58.934240 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:29:58.942558 systemd-logind[1990]: New session 7 of user core.
Oct 8 19:29:58.948452 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:29:59.050886 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:29:59.051474 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:29:59.268687 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:29:59.277689 (dockerd)[2344]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:29:59.660386 dockerd[2344]: time="2024-10-08T19:29:59.660299501Z" level=info msg="Starting up"
Oct 8 19:29:59.693070 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3885777034-merged.mount: Deactivated successfully.
Oct 8 19:30:00.233929 dockerd[2344]: time="2024-10-08T19:30:00.233451158Z" level=info msg="Loading containers: start."
Oct 8 19:30:00.426462 kernel: Initializing XFRM netlink socket
Oct 8 19:30:00.493356 (udev-worker)[2360]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:30:00.574652 systemd-networkd[1845]: docker0: Link UP
Oct 8 19:30:00.599837 dockerd[2344]: time="2024-10-08T19:30:00.599787561Z" level=info msg="Loading containers: done."
Oct 8 19:30:00.712164 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3562064576-merged.mount: Deactivated successfully.
Oct 8 19:30:00.716561 dockerd[2344]: time="2024-10-08T19:30:00.716491717Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:30:00.717091 dockerd[2344]: time="2024-10-08T19:30:00.716826000Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 8 19:30:00.717091 dockerd[2344]: time="2024-10-08T19:30:00.717027929Z" level=info msg="Daemon has completed initialization"
Oct 8 19:30:00.772926 dockerd[2344]: time="2024-10-08T19:30:00.772744366Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:30:00.776082 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:30:01.957306 containerd[2022]: time="2024-10-08T19:30:01.957154235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\""
Oct 8 19:30:02.679823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount612138138.mount: Deactivated successfully.
Oct 8 19:30:04.470171 containerd[2022]: time="2024-10-08T19:30:04.469526403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:04.471695 containerd[2022]: time="2024-10-08T19:30:04.471613070Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=29945962"
Oct 8 19:30:04.472977 containerd[2022]: time="2024-10-08T19:30:04.472889812Z" level=info msg="ImageCreate event name:\"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:04.478619 containerd[2022]: time="2024-10-08T19:30:04.478524083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:04.481091 containerd[2022]: time="2024-10-08T19:30:04.480862636Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"29942762\" in 2.523610961s"
Oct 8 19:30:04.481091 containerd[2022]: time="2024-10-08T19:30:04.480920289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\""
Oct 8 19:30:04.521462 containerd[2022]: time="2024-10-08T19:30:04.521414810Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\""
Oct 8 19:30:06.310255 containerd[2022]: time="2024-10-08T19:30:06.310007532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:06.312226 containerd[2022]: time="2024-10-08T19:30:06.312142535Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=26885773"
Oct 8 19:30:06.314379 containerd[2022]: time="2024-10-08T19:30:06.314324181Z" level=info msg="ImageCreate event name:\"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:06.322221 containerd[2022]: time="2024-10-08T19:30:06.320991714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:06.322600 containerd[2022]: time="2024-10-08T19:30:06.322558341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"28373587\" in 1.800908225s"
Oct 8 19:30:06.322729 containerd[2022]: time="2024-10-08T19:30:06.322698655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\""
Oct 8 19:30:06.364243 containerd[2022]: time="2024-10-08T19:30:06.364177572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\""
Oct 8 19:30:06.481326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:30:06.488580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:30:07.120007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:30:07.135828 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:30:07.230798 kubelet[2557]: E1008 19:30:07.230676 2557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:30:07.237280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:30:07.237617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:30:08.143097 containerd[2022]: time="2024-10-08T19:30:08.143017400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:08.145166 containerd[2022]: time="2024-10-08T19:30:08.145097331Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=16154272"
Oct 8 19:30:08.146370 containerd[2022]: time="2024-10-08T19:30:08.146284581Z" level=info msg="ImageCreate event name:\"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:08.153690 containerd[2022]: time="2024-10-08T19:30:08.153594770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:08.155933 containerd[2022]: time="2024-10-08T19:30:08.155757999Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"17642104\" in 1.791230224s"
Oct 8 19:30:08.155933 containerd[2022]: time="2024-10-08T19:30:08.155812062Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\""
Oct 8 19:30:08.194422 containerd[2022]: time="2024-10-08T19:30:08.194375430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\""
Oct 8 19:30:09.664961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054980454.mount: Deactivated successfully.
Oct 8 19:30:10.145293 containerd[2022]: time="2024-10-08T19:30:10.145235919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:10.147648 containerd[2022]: time="2024-10-08T19:30:10.147584112Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=25648341"
Oct 8 19:30:10.149495 containerd[2022]: time="2024-10-08T19:30:10.149431415Z" level=info msg="ImageCreate event name:\"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:10.152671 containerd[2022]: time="2024-10-08T19:30:10.152594636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:10.154369 containerd[2022]: time="2024-10-08T19:30:10.154168479Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"25647360\" in 1.959564959s"
Oct 8 19:30:10.154369 containerd[2022]: time="2024-10-08T19:30:10.154249027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\""
Oct 8 19:30:10.195010 containerd[2022]: time="2024-10-08T19:30:10.194928261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:30:10.795905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181840592.mount: Deactivated successfully.
Oct 8 19:30:12.104530 containerd[2022]: time="2024-10-08T19:30:12.104449642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:12.150423 containerd[2022]: time="2024-10-08T19:30:12.150357981Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Oct 8 19:30:12.175478 containerd[2022]: time="2024-10-08T19:30:12.175378833Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:12.200697 containerd[2022]: time="2024-10-08T19:30:12.199918941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:12.202694 containerd[2022]: time="2024-10-08T19:30:12.202429420Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.007420154s"
Oct 8 19:30:12.202694 containerd[2022]: time="2024-10-08T19:30:12.202495873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 19:30:12.246296 containerd[2022]: time="2024-10-08T19:30:12.246247658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:30:12.819450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571679099.mount: Deactivated successfully.
Oct 8 19:30:12.830132 containerd[2022]: time="2024-10-08T19:30:12.830059366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:12.832819 containerd[2022]: time="2024-10-08T19:30:12.832754053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Oct 8 19:30:12.834830 containerd[2022]: time="2024-10-08T19:30:12.834755753Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:12.839332 containerd[2022]: time="2024-10-08T19:30:12.839268172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:12.841273 containerd[2022]: time="2024-10-08T19:30:12.841075556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 594.431099ms"
Oct 8 19:30:12.841273 containerd[2022]: time="2024-10-08T19:30:12.841126593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 19:30:12.880955 containerd[2022]: time="2024-10-08T19:30:12.880885699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Oct 8 19:30:13.450566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount653035094.mount: Deactivated successfully.
Oct 8 19:30:16.344780 containerd[2022]: time="2024-10-08T19:30:16.344719663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:16.376106 containerd[2022]: time="2024-10-08T19:30:16.376041780Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Oct 8 19:30:16.397247 containerd[2022]: time="2024-10-08T19:30:16.397135218Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:16.435664 containerd[2022]: time="2024-10-08T19:30:16.435085007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:30:16.438367 containerd[2022]: time="2024-10-08T19:30:16.437644566Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.556694119s"
Oct 8 19:30:16.438715
containerd[2022]: time="2024-10-08T19:30:16.438505373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Oct 8 19:30:17.482375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:30:17.491390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:30:18.600465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:30:18.618818 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:30:18.703070 kubelet[2757]: E1008 19:30:18.702973 2757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:30:18.708230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:30:18.708648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:30:24.012407 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 8 19:30:24.747716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:30:24.755716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:30:24.798633 systemd[1]: Reloading requested from client PID 2775 ('systemctl') (unit session-7.scope)... Oct 8 19:30:24.798664 systemd[1]: Reloading... Oct 8 19:30:24.987262 zram_generator::config[2816]: No configuration found. 
Oct 8 19:30:25.209408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:30:25.384005 systemd[1]: Reloading finished in 584 ms.
Oct 8 19:30:25.469007 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:30:25.469401 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:30:25.470019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:30:25.478856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:30:26.305066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:30:26.321983 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:30:26.402251 kubelet[2875]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:30:26.402251 kubelet[2875]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:30:26.402251 kubelet[2875]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:30:26.402251 kubelet[2875]: I1008 19:30:26.400697 2875 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:30:28.392072 kubelet[2875]: I1008 19:30:28.392007 2875 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 8 19:30:28.392072 kubelet[2875]: I1008 19:30:28.392058 2875 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:30:28.392810 kubelet[2875]: I1008 19:30:28.392532 2875 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 8 19:30:28.419380 kubelet[2875]: I1008 19:30:28.418972 2875 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:30:28.419380 kubelet[2875]: E1008 19:30:28.419331 2875 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.433148 kubelet[2875]: I1008 19:30:28.433105 2875 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:30:28.435479 kubelet[2875]: I1008 19:30:28.435410 2875 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:30:28.435801 kubelet[2875]: I1008 19:30:28.435471 2875 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-181","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:30:28.435977 kubelet[2875]: I1008 19:30:28.435819 2875 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:30:28.435977 kubelet[2875]: I1008 19:30:28.435840 2875 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:30:28.436137 kubelet[2875]: I1008 19:30:28.436107 2875 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:30:28.437632 kubelet[2875]: I1008 19:30:28.437585 2875 kubelet.go:400] "Attempting to sync node with API server"
Oct 8 19:30:28.437632 kubelet[2875]: I1008 19:30:28.437629 2875 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:30:28.439489 kubelet[2875]: I1008 19:30:28.437705 2875 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:30:28.439489 kubelet[2875]: I1008 19:30:28.437726 2875 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:30:28.442242 kubelet[2875]: I1008 19:30:28.439764 2875 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:30:28.442242 kubelet[2875]: I1008 19:30:28.440134 2875 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:30:28.442242 kubelet[2875]: W1008 19:30:28.440261 2875 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:30:28.442242 kubelet[2875]: I1008 19:30:28.441314 2875 server.go:1264] "Started kubelet"
Oct 8 19:30:28.442242 kubelet[2875]: W1008 19:30:28.441517 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-181&limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.442242 kubelet[2875]: E1008 19:30:28.441599 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-181&limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.442242 kubelet[2875]: W1008 19:30:28.441703 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.442242 kubelet[2875]: E1008 19:30:28.441753 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.450232 kubelet[2875]: I1008 19:30:28.449369 2875 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:30:28.460151 kubelet[2875]: I1008 19:30:28.460070 2875 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:30:28.460435 kubelet[2875]: I1008 19:30:28.460414 2875 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:30:28.464406 kubelet[2875]: I1008 19:30:28.464352 2875 server.go:455] "Adding debug handlers to kubelet server"
Oct 8 19:30:28.467158 kubelet[2875]: I1008 19:30:28.467085 2875 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 8 19:30:28.469725 kubelet[2875]: I1008 19:30:28.469670 2875 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:30:28.470014 kubelet[2875]: I1008 19:30:28.469946 2875 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:30:28.470485 kubelet[2875]: I1008 19:30:28.470452 2875 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:30:28.472333 kubelet[2875]: E1008 19:30:28.472256 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-181?timeout=10s\": dial tcp 172.31.26.181:6443: connect: connection refused" interval="200ms"
Oct 8 19:30:28.472925 kubelet[2875]: E1008 19:30:28.472631 2875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.181:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-181.17fc9109d667aebc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-181,UID:ip-172-31-26-181,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-181,},FirstTimestamp:2024-10-08 19:30:28.441280188 +0000 UTC m=+2.112390439,LastTimestamp:2024-10-08 19:30:28.441280188 +0000 UTC m=+2.112390439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-181,}"
Oct 8 19:30:28.474615 kubelet[2875]: I1008 19:30:28.474452 2875 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:30:28.476061 kubelet[2875]: W1008 19:30:28.475333 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.476061 kubelet[2875]: E1008 19:30:28.475447 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.480510 kubelet[2875]: I1008 19:30:28.480449 2875 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:30:28.480721 kubelet[2875]: I1008 19:30:28.480702 2875 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:30:28.490590 kubelet[2875]: E1008 19:30:28.490535 2875 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:30:28.503985 kubelet[2875]: I1008 19:30:28.503895 2875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:30:28.506058 kubelet[2875]: I1008 19:30:28.506002 2875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:30:28.506261 kubelet[2875]: I1008 19:30:28.506108 2875 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:30:28.506261 kubelet[2875]: I1008 19:30:28.506142 2875 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 8 19:30:28.506427 kubelet[2875]: E1008 19:30:28.506241 2875 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:30:28.515018 kubelet[2875]: W1008 19:30:28.514941 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.515018 kubelet[2875]: E1008 19:30:28.515022 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:28.515722 kubelet[2875]: I1008 19:30:28.515701 2875 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:30:28.515781 kubelet[2875]: I1008 19:30:28.515724 2875 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:30:28.515781 kubelet[2875]: I1008 19:30:28.515752 2875 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:30:28.563407 kubelet[2875]: I1008 19:30:28.563349 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-181"
Oct 8 19:30:28.563860 kubelet[2875]: E1008 19:30:28.563813 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.181:6443/api/v1/nodes\": dial tcp 172.31.26.181:6443: connect: connection refused" node="ip-172-31-26-181"
Oct 8 19:30:28.606671 kubelet[2875]: E1008 19:30:28.606630 2875 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 19:30:28.628225 kubelet[2875]: I1008 19:30:28.628067 2875 policy_none.go:49] "None policy: Start"
Oct 8 19:30:28.629144 kubelet[2875]: I1008 19:30:28.629105 2875 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:30:28.629327 kubelet[2875]: I1008 19:30:28.629152 2875 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:30:28.686260 kubelet[2875]: E1008 19:30:28.673596 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-181?timeout=10s\": dial tcp 172.31.26.181:6443: connect: connection refused" interval="400ms"
Oct 8 19:30:28.704325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 8 19:30:28.720979 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 8 19:30:28.728267 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 8 19:30:28.744976 kubelet[2875]: I1008 19:30:28.743839 2875 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:30:28.744976 kubelet[2875]: I1008 19:30:28.744145 2875 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 8 19:30:28.744976 kubelet[2875]: I1008 19:30:28.744326 2875 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:30:28.747642 kubelet[2875]: E1008 19:30:28.747074 2875 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-181\" not found"
Oct 8 19:30:28.766139 kubelet[2875]: I1008 19:30:28.766060 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-181"
Oct 8 19:30:28.766760 kubelet[2875]: E1008 19:30:28.766713 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.181:6443/api/v1/nodes\": dial tcp 172.31.26.181:6443: connect: connection refused" node="ip-172-31-26-181"
Oct 8 19:30:28.807542 kubelet[2875]: I1008 19:30:28.807163 2875 topology_manager.go:215] "Topology Admit Handler" podUID="750bed91a592fddd9d104849f4bf1847" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:28.809698 kubelet[2875]: I1008 19:30:28.809659 2875 topology_manager.go:215] "Topology Admit Handler" podUID="255171911dbdfc7d1a4087ea99f115bb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:28.813528 kubelet[2875]: I1008 19:30:28.812838 2875 topology_manager.go:215] "Topology Admit Handler" podUID="86303169b4a767e7bfbfdce59dc5f76a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-181"
Oct 8 19:30:28.830894 systemd[1]: Created slice kubepods-burstable-pod750bed91a592fddd9d104849f4bf1847.slice - libcontainer container kubepods-burstable-pod750bed91a592fddd9d104849f4bf1847.slice.
Oct 8 19:30:28.846612 systemd[1]: Created slice kubepods-burstable-pod255171911dbdfc7d1a4087ea99f115bb.slice - libcontainer container kubepods-burstable-pod255171911dbdfc7d1a4087ea99f115bb.slice.
Oct 8 19:30:28.855470 systemd[1]: Created slice kubepods-burstable-pod86303169b4a767e7bfbfdce59dc5f76a.slice - libcontainer container kubepods-burstable-pod86303169b4a767e7bfbfdce59dc5f76a.slice.
Oct 8 19:30:28.872257 kubelet[2875]: I1008 19:30:28.872144 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:28.872257 kubelet[2875]: I1008 19:30:28.872234 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:28.872768 kubelet[2875]: I1008 19:30:28.872281 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/750bed91a592fddd9d104849f4bf1847-ca-certs\") pod \"kube-apiserver-ip-172-31-26-181\" (UID: \"750bed91a592fddd9d104849f4bf1847\") " pod="kube-system/kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:28.872768 kubelet[2875]: I1008 19:30:28.872319 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/750bed91a592fddd9d104849f4bf1847-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-181\" (UID: \"750bed91a592fddd9d104849f4bf1847\") " pod="kube-system/kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:28.872768 kubelet[2875]: I1008 19:30:28.872366 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/750bed91a592fddd9d104849f4bf1847-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-181\" (UID: \"750bed91a592fddd9d104849f4bf1847\") " pod="kube-system/kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:28.872768 kubelet[2875]: I1008 19:30:28.872418 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:28.872768 kubelet[2875]: I1008 19:30:28.872459 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:28.873024 kubelet[2875]: I1008 19:30:28.872495 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:28.873024 kubelet[2875]: I1008 19:30:28.872531 2875 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86303169b4a767e7bfbfdce59dc5f76a-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-181\" (UID: \"86303169b4a767e7bfbfdce59dc5f76a\") " pod="kube-system/kube-scheduler-ip-172-31-26-181"
Oct 8 19:30:29.074935 kubelet[2875]: E1008 19:30:29.074864 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-181?timeout=10s\": dial tcp 172.31.26.181:6443: connect: connection refused" interval="800ms"
Oct 8 19:30:29.142148 containerd[2022]: time="2024-10-08T19:30:29.142051852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-181,Uid:750bed91a592fddd9d104849f4bf1847,Namespace:kube-system,Attempt:0,}"
Oct 8 19:30:29.154015 containerd[2022]: time="2024-10-08T19:30:29.153910022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-181,Uid:255171911dbdfc7d1a4087ea99f115bb,Namespace:kube-system,Attempt:0,}"
Oct 8 19:30:29.161275 containerd[2022]: time="2024-10-08T19:30:29.160929882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-181,Uid:86303169b4a767e7bfbfdce59dc5f76a,Namespace:kube-system,Attempt:0,}"
Oct 8 19:30:29.170323 kubelet[2875]: I1008 19:30:29.170267 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-181"
Oct 8 19:30:29.171266 kubelet[2875]: E1008 19:30:29.171181 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.181:6443/api/v1/nodes\": dial tcp 172.31.26.181:6443: connect: connection refused" node="ip-172-31-26-181"
Oct 8 19:30:29.342513 kubelet[2875]: W1008 19:30:29.342370 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.342513 kubelet[2875]: E1008 19:30:29.342436 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.695828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898277901.mount: Deactivated successfully.
Oct 8 19:30:29.708738 containerd[2022]: time="2024-10-08T19:30:29.708660150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:30:29.710459 containerd[2022]: time="2024-10-08T19:30:29.710398487Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:30:29.712157 containerd[2022]: time="2024-10-08T19:30:29.712103411Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:30:29.713583 kubelet[2875]: W1008 19:30:29.713456 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-181&limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.713583 kubelet[2875]: E1008 19:30:29.713549 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-181&limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.714594 containerd[2022]: time="2024-10-08T19:30:29.714526054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:30:29.715513 containerd[2022]: time="2024-10-08T19:30:29.715461934Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:30:29.716813 containerd[2022]: time="2024-10-08T19:30:29.716770384Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Oct 8 19:30:29.718505 containerd[2022]: time="2024-10-08T19:30:29.718423742Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:30:29.732242 containerd[2022]: time="2024-10-08T19:30:29.731245069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:30:29.733027 containerd[2022]: time="2024-10-08T19:30:29.732974750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.91004ms"
Oct 8 19:30:29.738224 containerd[2022]: time="2024-10-08T19:30:29.738145314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.105435ms"
Oct 8 19:30:29.752088 containerd[2022]: time="2024-10-08T19:30:29.752005701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.822119ms"
Oct 8 19:30:29.841440 kubelet[2875]: W1008 19:30:29.841307 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.841440 kubelet[2875]: E1008 19:30:29.841372 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.843157 kubelet[2875]: W1008 19:30:29.843030 2875 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.843157 kubelet[2875]: E1008 19:30:29.843122 2875 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:29.875735 kubelet[2875]: E1008 19:30:29.875677 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-181?timeout=10s\": dial tcp 172.31.26.181:6443: connect: connection refused" interval="1.6s"
Oct 8 19:30:29.966402 containerd[2022]: time="2024-10-08T19:30:29.965897731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:30:29.966402 containerd[2022]: time="2024-10-08T19:30:29.965999830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:30:29.966402 containerd[2022]: time="2024-10-08T19:30:29.966031586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:30:29.966402 containerd[2022]: time="2024-10-08T19:30:29.966056270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:30:29.975920 kubelet[2875]: I1008 19:30:29.975585 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-181"
Oct 8 19:30:29.976292 kubelet[2875]: E1008 19:30:29.976160 2875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.181:6443/api/v1/nodes\": dial tcp 172.31.26.181:6443: connect: connection refused" node="ip-172-31-26-181"
Oct 8 19:30:29.982703 containerd[2022]: time="2024-10-08T19:30:29.982445445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:30:29.982703 containerd[2022]: time="2024-10-08T19:30:29.982558925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:30:29.983104 containerd[2022]: time="2024-10-08T19:30:29.982658719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:30:29.983104 containerd[2022]: time="2024-10-08T19:30:29.982985234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:30:29.998257 containerd[2022]: time="2024-10-08T19:30:29.995231330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:30:29.998257 containerd[2022]: time="2024-10-08T19:30:29.995318446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:30:29.998257 containerd[2022]: time="2024-10-08T19:30:29.995349433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:30:29.998257 containerd[2022]: time="2024-10-08T19:30:29.995373746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:30:30.007578 systemd[1]: Started cri-containerd-9b61d75b0a743f1e614989759a8a1f6dcfa1a3a8aa336b01dc16e3e0e2e678f7.scope - libcontainer container 9b61d75b0a743f1e614989759a8a1f6dcfa1a3a8aa336b01dc16e3e0e2e678f7. Oct 8 19:30:30.051295 systemd[1]: Started cri-containerd-f45e71363a99044c2911db2bacb315cea8e017ccddefd1db3c9b32596a047a93.scope - libcontainer container f45e71363a99044c2911db2bacb315cea8e017ccddefd1db3c9b32596a047a93. Oct 8 19:30:30.065153 systemd[1]: Started cri-containerd-710b640f02c82029860f71a2436b5e7434402dce8c5bea9968311bd18198e9d7.scope - libcontainer container 710b640f02c82029860f71a2436b5e7434402dce8c5bea9968311bd18198e9d7. 
Oct 8 19:30:30.119824 containerd[2022]: time="2024-10-08T19:30:30.119772295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-181,Uid:255171911dbdfc7d1a4087ea99f115bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b61d75b0a743f1e614989759a8a1f6dcfa1a3a8aa336b01dc16e3e0e2e678f7\""
Oct 8 19:30:30.133099 containerd[2022]: time="2024-10-08T19:30:30.132891983Z" level=info msg="CreateContainer within sandbox \"9b61d75b0a743f1e614989759a8a1f6dcfa1a3a8aa336b01dc16e3e0e2e678f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 19:30:30.172285 containerd[2022]: time="2024-10-08T19:30:30.172175159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-181,Uid:86303169b4a767e7bfbfdce59dc5f76a,Namespace:kube-system,Attempt:0,} returns sandbox id \"710b640f02c82029860f71a2436b5e7434402dce8c5bea9968311bd18198e9d7\""
Oct 8 19:30:30.179295 containerd[2022]: time="2024-10-08T19:30:30.179104807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-181,Uid:750bed91a592fddd9d104849f4bf1847,Namespace:kube-system,Attempt:0,} returns sandbox id \"f45e71363a99044c2911db2bacb315cea8e017ccddefd1db3c9b32596a047a93\""
Oct 8 19:30:30.189529 containerd[2022]: time="2024-10-08T19:30:30.189476370Z" level=info msg="CreateContainer within sandbox \"710b640f02c82029860f71a2436b5e7434402dce8c5bea9968311bd18198e9d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 19:30:30.193623 containerd[2022]: time="2024-10-08T19:30:30.193093934Z" level=info msg="CreateContainer within sandbox \"f45e71363a99044c2911db2bacb315cea8e017ccddefd1db3c9b32596a047a93\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 19:30:30.201665 containerd[2022]: time="2024-10-08T19:30:30.201601541Z" level=info msg="CreateContainer within sandbox \"9b61d75b0a743f1e614989759a8a1f6dcfa1a3a8aa336b01dc16e3e0e2e678f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61\""
Oct 8 19:30:30.204226 containerd[2022]: time="2024-10-08T19:30:30.203601561Z" level=info msg="StartContainer for \"ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61\""
Oct 8 19:30:30.225030 containerd[2022]: time="2024-10-08T19:30:30.224864487Z" level=info msg="CreateContainer within sandbox \"710b640f02c82029860f71a2436b5e7434402dce8c5bea9968311bd18198e9d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7\""
Oct 8 19:30:30.226451 containerd[2022]: time="2024-10-08T19:30:30.226405217Z" level=info msg="StartContainer for \"f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7\""
Oct 8 19:30:30.234325 containerd[2022]: time="2024-10-08T19:30:30.234254320Z" level=info msg="CreateContainer within sandbox \"f45e71363a99044c2911db2bacb315cea8e017ccddefd1db3c9b32596a047a93\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ad354deef700065f3c24c0ae7f5d2fd3fd7195b940f60167921759c0701836bd\""
Oct 8 19:30:30.235128 containerd[2022]: time="2024-10-08T19:30:30.235063188Z" level=info msg="StartContainer for \"ad354deef700065f3c24c0ae7f5d2fd3fd7195b940f60167921759c0701836bd\""
Oct 8 19:30:30.263398 systemd[1]: Started cri-containerd-ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61.scope - libcontainer container ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61.
Oct 8 19:30:30.303721 systemd[1]: Started cri-containerd-f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7.scope - libcontainer container f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7.
Oct 8 19:30:30.320602 systemd[1]: Started cri-containerd-ad354deef700065f3c24c0ae7f5d2fd3fd7195b940f60167921759c0701836bd.scope - libcontainer container ad354deef700065f3c24c0ae7f5d2fd3fd7195b940f60167921759c0701836bd.
Oct 8 19:30:30.410038 containerd[2022]: time="2024-10-08T19:30:30.409380320Z" level=info msg="StartContainer for \"f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7\" returns successfully"
Oct 8 19:30:30.421696 containerd[2022]: time="2024-10-08T19:30:30.421638759Z" level=info msg="StartContainer for \"ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61\" returns successfully"
Oct 8 19:30:30.458982 containerd[2022]: time="2024-10-08T19:30:30.458888334Z" level=info msg="StartContainer for \"ad354deef700065f3c24c0ae7f5d2fd3fd7195b940f60167921759c0701836bd\" returns successfully"
Oct 8 19:30:30.529907 kubelet[2875]: E1008 19:30:30.529745 2875 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.181:6443: connect: connection refused
Oct 8 19:30:31.579468 kubelet[2875]: I1008 19:30:31.579420 2875 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-181"
Oct 8 19:30:34.616025 kubelet[2875]: E1008 19:30:34.615963 2875 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-181\" not found" node="ip-172-31-26-181"
Oct 8 19:30:34.665634 kubelet[2875]: I1008 19:30:34.665481 2875 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-181"
Oct 8 19:30:35.443786 kubelet[2875]: I1008 19:30:35.443484 2875 apiserver.go:52] "Watching apiserver"
Oct 8 19:30:35.467672 kubelet[2875]: I1008 19:30:35.467633 2875 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Oct 8 19:30:36.769452 systemd[1]: Reloading requested from client PID 3150 ('systemctl') (unit session-7.scope)...
Oct 8 19:30:36.769484 systemd[1]: Reloading...
Oct 8 19:30:36.962413 zram_generator::config[3194]: No configuration found.
Oct 8 19:30:37.197321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:30:37.414750 systemd[1]: Reloading finished in 644 ms.
Oct 8 19:30:37.504242 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:30:37.515232 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:30:37.515898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:30:37.516094 systemd[1]: kubelet.service: Consumed 2.798s CPU time, 113.7M memory peak, 0B memory swap peak.
Oct 8 19:30:37.527892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:30:38.368472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:30:38.383369 (kubelet)[3248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:30:38.468179 kubelet[3248]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:30:38.468179 kubelet[3248]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:30:38.468179 kubelet[3248]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:30:38.468878 kubelet[3248]: I1008 19:30:38.468392 3248 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:30:38.478397 kubelet[3248]: I1008 19:30:38.477848 3248 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 8 19:30:38.478397 kubelet[3248]: I1008 19:30:38.477893 3248 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:30:38.478397 kubelet[3248]: I1008 19:30:38.478249 3248 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 8 19:30:38.481417 kubelet[3248]: I1008 19:30:38.481349 3248 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:30:38.484539 kubelet[3248]: I1008 19:30:38.484102 3248 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:30:38.502378 kubelet[3248]: I1008 19:30:38.501016 3248 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:30:38.502743 kubelet[3248]: I1008 19:30:38.502666 3248 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:30:38.504048 kubelet[3248]: I1008 19:30:38.502739 3248 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-181","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:30:38.504511 kubelet[3248]: I1008 19:30:38.504482 3248 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:30:38.504646 kubelet[3248]: I1008 19:30:38.504626 3248 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:30:38.504890 kubelet[3248]: I1008 19:30:38.504819 3248 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:30:38.505762 kubelet[3248]: I1008 19:30:38.505728 3248 kubelet.go:400] "Attempting to sync node with API server"
Oct 8 19:30:38.506677 kubelet[3248]: I1008 19:30:38.506643 3248 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:30:38.506891 kubelet[3248]: I1008 19:30:38.506870 3248 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:30:38.507039 kubelet[3248]: I1008 19:30:38.507017 3248 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:30:38.510743 kubelet[3248]: I1008 19:30:38.510693 3248 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:30:38.511186 kubelet[3248]: I1008 19:30:38.511033 3248 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:30:38.512355 kubelet[3248]: I1008 19:30:38.511722 3248 server.go:1264] "Started kubelet"
Oct 8 19:30:38.520103 kubelet[3248]: I1008 19:30:38.519743 3248 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:30:38.534245 kubelet[3248]: I1008 19:30:38.532612 3248 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:30:38.540217 kubelet[3248]: I1008 19:30:38.538639 3248 server.go:455] "Adding debug handlers to kubelet server"
Oct 8 19:30:38.543232 kubelet[3248]: I1008 19:30:38.542940 3248 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:30:38.563208 kubelet[3248]: I1008 19:30:38.563158 3248 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:30:38.564218 kubelet[3248]: I1008 19:30:38.553787 3248 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 8 19:30:38.564218 kubelet[3248]: I1008 19:30:38.553758 3248 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:30:38.564687 kubelet[3248]: I1008 19:30:38.564664 3248 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:30:38.589468 kubelet[3248]: I1008 19:30:38.589383 3248 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:30:38.596881 update_engine[1991]: I1008 19:30:38.595618 1991 update_attempter.cc:509] Updating boot flags...
Oct 8 19:30:38.602843 kubelet[3248]: I1008 19:30:38.602384 3248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:30:38.607230 kubelet[3248]: I1008 19:30:38.606695 3248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:30:38.607230 kubelet[3248]: I1008 19:30:38.606770 3248 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:30:38.607230 kubelet[3248]: I1008 19:30:38.606811 3248 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 8 19:30:38.607230 kubelet[3248]: E1008 19:30:38.606884 3248 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:30:38.678018 kubelet[3248]: I1008 19:30:38.677711 3248 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:30:38.678018 kubelet[3248]: I1008 19:30:38.677752 3248 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:30:38.681038 kubelet[3248]: E1008 19:30:38.680980 3248 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Oct 8 19:30:38.690929 kubelet[3248]: E1008 19:30:38.690883 3248 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:30:38.691501 kubelet[3248]: I1008 19:30:38.691462 3248 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-181"
Oct 8 19:30:38.711152 kubelet[3248]: E1008 19:30:38.708953 3248 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 19:30:38.721356 kubelet[3248]: I1008 19:30:38.720536 3248 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-26-181"
Oct 8 19:30:38.721356 kubelet[3248]: I1008 19:30:38.721170 3248 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-181"
Oct 8 19:30:38.844247 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3299)
Oct 8 19:30:38.893664 kubelet[3248]: I1008 19:30:38.893521 3248 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:30:38.893664 kubelet[3248]: I1008 19:30:38.893551 3248 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:30:38.893664 kubelet[3248]: I1008 19:30:38.893609 3248 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:30:38.894494 kubelet[3248]: I1008 19:30:38.894426 3248 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:30:38.894687 kubelet[3248]: I1008 19:30:38.894642 3248 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:30:38.894875 kubelet[3248]: I1008 19:30:38.894801 3248 policy_none.go:49] "None policy: Start"
Oct 8 19:30:38.898975 kubelet[3248]: I1008 19:30:38.897668 3248 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:30:38.898975 kubelet[3248]: I1008 19:30:38.897713 3248 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:30:38.898975 kubelet[3248]: I1008 19:30:38.898005 3248 state_mem.go:75] "Updated machine memory state"
Oct 8 19:30:38.910908 kubelet[3248]: E1008 19:30:38.910577 3248 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 19:30:38.917839 kubelet[3248]: I1008 19:30:38.917802 3248 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:30:38.920984 kubelet[3248]: I1008 19:30:38.920163 3248 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 8 19:30:38.920984 kubelet[3248]: I1008 19:30:38.920748 3248 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:30:39.293381 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3301)
Oct 8 19:30:39.313329 kubelet[3248]: I1008 19:30:39.310756 3248 topology_manager.go:215] "Topology Admit Handler" podUID="750bed91a592fddd9d104849f4bf1847" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:39.313329 kubelet[3248]: I1008 19:30:39.310940 3248 topology_manager.go:215] "Topology Admit Handler" podUID="255171911dbdfc7d1a4087ea99f115bb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:39.313329 kubelet[3248]: I1008 19:30:39.311018 3248 topology_manager.go:215] "Topology Admit Handler" podUID="86303169b4a767e7bfbfdce59dc5f76a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-181"
Oct 8 19:30:39.373149 kubelet[3248]: I1008 19:30:39.373086 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:39.373332 kubelet[3248]: I1008 19:30:39.373160 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:39.373332 kubelet[3248]: I1008 19:30:39.373278 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:39.373332 kubelet[3248]: I1008 19:30:39.373320 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/750bed91a592fddd9d104849f4bf1847-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-181\" (UID: \"750bed91a592fddd9d104849f4bf1847\") " pod="kube-system/kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:39.373639 kubelet[3248]: I1008 19:30:39.373357 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/750bed91a592fddd9d104849f4bf1847-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-181\" (UID: \"750bed91a592fddd9d104849f4bf1847\") " pod="kube-system/kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:39.373639 kubelet[3248]: I1008 19:30:39.373391 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:39.373639 kubelet[3248]: I1008 19:30:39.373425 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/255171911dbdfc7d1a4087ea99f115bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-181\" (UID: \"255171911dbdfc7d1a4087ea99f115bb\") " pod="kube-system/kube-controller-manager-ip-172-31-26-181"
Oct 8 19:30:39.373639 kubelet[3248]: I1008 19:30:39.373463 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86303169b4a767e7bfbfdce59dc5f76a-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-181\" (UID: \"86303169b4a767e7bfbfdce59dc5f76a\") " pod="kube-system/kube-scheduler-ip-172-31-26-181"
Oct 8 19:30:39.373639 kubelet[3248]: I1008 19:30:39.373496 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/750bed91a592fddd9d104849f4bf1847-ca-certs\") pod \"kube-apiserver-ip-172-31-26-181\" (UID: \"750bed91a592fddd9d104849f4bf1847\") " pod="kube-system/kube-apiserver-ip-172-31-26-181"
Oct 8 19:30:39.532599 kubelet[3248]: I1008 19:30:39.525223 3248 apiserver.go:52] "Watching apiserver"
Oct 8 19:30:39.564560 kubelet[3248]: I1008 19:30:39.564382 3248 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Oct 8 19:30:39.738808 kubelet[3248]: I1008 19:30:39.738671 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-181" podStartSLOduration=0.738647908 podStartE2EDuration="738.647908ms" podCreationTimestamp="2024-10-08 19:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:30:39.724698533 +0000 UTC m=+1.333336438" watchObservedRunningTime="2024-10-08 19:30:39.738647908 +0000 UTC m=+1.347285789"
Oct 8 19:30:39.739110 kubelet[3248]: I1008 19:30:39.738975 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-181" podStartSLOduration=0.738962969 podStartE2EDuration="738.962969ms" podCreationTimestamp="2024-10-08 19:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:30:39.73805881 +0000 UTC m=+1.346696727" watchObservedRunningTime="2024-10-08 19:30:39.738962969 +0000 UTC m=+1.347600850"
Oct 8 19:30:39.780480 kubelet[3248]: I1008 19:30:39.780293 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-181" podStartSLOduration=0.780273419 podStartE2EDuration="780.273419ms" podCreationTimestamp="2024-10-08 19:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:30:39.763078245 +0000 UTC m=+1.371716138" watchObservedRunningTime="2024-10-08 19:30:39.780273419 +0000 UTC m=+1.388911300"
Oct 8 19:30:43.425021 sudo[2334]: pam_unix(sudo:session): session closed for user root
Oct 8 19:30:43.448625 sshd[2331]: pam_unix(sshd:session): session closed for user core
Oct 8 19:30:43.455840 systemd[1]: sshd@6-172.31.26.181:22-139.178.68.195:49282.service: Deactivated successfully.
Oct 8 19:30:43.459392 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:30:43.459809 systemd[1]: session-7.scope: Consumed 11.497s CPU time, 134.3M memory peak, 0B memory swap peak.
Oct 8 19:30:43.461115 systemd-logind[1990]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:30:43.463695 systemd-logind[1990]: Removed session 7.
Oct 8 19:30:50.479323 kubelet[3248]: I1008 19:30:50.479241 3248 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:30:50.481501 containerd[2022]: time="2024-10-08T19:30:50.481429477Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:30:50.482915 kubelet[3248]: I1008 19:30:50.482644 3248 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:30:51.368013 kubelet[3248]: I1008 19:30:51.366067 3248 topology_manager.go:215] "Topology Admit Handler" podUID="49caa44a-2edf-40d5-8dcd-3bd51bb6c29a" podNamespace="kube-system" podName="kube-proxy-27rw5"
Oct 8 19:30:51.386803 systemd[1]: Created slice kubepods-besteffort-pod49caa44a_2edf_40d5_8dcd_3bd51bb6c29a.slice - libcontainer container kubepods-besteffort-pod49caa44a_2edf_40d5_8dcd_3bd51bb6c29a.slice.
Oct 8 19:30:51.450410 kubelet[3248]: I1008 19:30:51.450125 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49caa44a-2edf-40d5-8dcd-3bd51bb6c29a-kube-proxy\") pod \"kube-proxy-27rw5\" (UID: \"49caa44a-2edf-40d5-8dcd-3bd51bb6c29a\") " pod="kube-system/kube-proxy-27rw5"
Oct 8 19:30:51.450410 kubelet[3248]: I1008 19:30:51.450219 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49caa44a-2edf-40d5-8dcd-3bd51bb6c29a-xtables-lock\") pod \"kube-proxy-27rw5\" (UID: \"49caa44a-2edf-40d5-8dcd-3bd51bb6c29a\") " pod="kube-system/kube-proxy-27rw5"
Oct 8 19:30:51.450410 kubelet[3248]: I1008 19:30:51.450261 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49caa44a-2edf-40d5-8dcd-3bd51bb6c29a-lib-modules\") pod \"kube-proxy-27rw5\" (UID: \"49caa44a-2edf-40d5-8dcd-3bd51bb6c29a\") " pod="kube-system/kube-proxy-27rw5"
Oct 8 19:30:51.450410 kubelet[3248]: I1008 19:30:51.450306 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnnw\" (UniqueName: \"kubernetes.io/projected/49caa44a-2edf-40d5-8dcd-3bd51bb6c29a-kube-api-access-bbnnw\") pod \"kube-proxy-27rw5\" (UID: \"49caa44a-2edf-40d5-8dcd-3bd51bb6c29a\") " pod="kube-system/kube-proxy-27rw5"
Oct 8 19:30:51.635608 kubelet[3248]: I1008 19:30:51.635284 3248 topology_manager.go:215] "Topology Admit Handler" podUID="e1533211-1ee3-46f5-bdc8-48ced861e8c2" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-jp5mt"
Oct 8 19:30:51.658860 systemd[1]: Created slice kubepods-besteffort-pode1533211_1ee3_46f5_bdc8_48ced861e8c2.slice - libcontainer container kubepods-besteffort-pode1533211_1ee3_46f5_bdc8_48ced861e8c2.slice.
Oct 8 19:30:51.702693 containerd[2022]: time="2024-10-08T19:30:51.702624902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27rw5,Uid:49caa44a-2edf-40d5-8dcd-3bd51bb6c29a,Namespace:kube-system,Attempt:0,}"
Oct 8 19:30:51.747218 containerd[2022]: time="2024-10-08T19:30:51.747015743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:30:51.747560 containerd[2022]: time="2024-10-08T19:30:51.747166430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:30:51.747560 containerd[2022]: time="2024-10-08T19:30:51.747265768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:30:51.747560 containerd[2022]: time="2024-10-08T19:30:51.747305820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:30:51.791517 systemd[1]: Started cri-containerd-b5f6907764512f90522cc380df6116355cd533d03d2f34ac3d1746b9f9967391.scope - libcontainer container b5f6907764512f90522cc380df6116355cd533d03d2f34ac3d1746b9f9967391.
Oct 8 19:30:51.833247 containerd[2022]: time="2024-10-08T19:30:51.833027292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27rw5,Uid:49caa44a-2edf-40d5-8dcd-3bd51bb6c29a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5f6907764512f90522cc380df6116355cd533d03d2f34ac3d1746b9f9967391\"" Oct 8 19:30:51.841580 containerd[2022]: time="2024-10-08T19:30:51.841372542Z" level=info msg="CreateContainer within sandbox \"b5f6907764512f90522cc380df6116355cd533d03d2f34ac3d1746b9f9967391\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:30:51.966232 containerd[2022]: time="2024-10-08T19:30:51.965956200Z" level=info msg="CreateContainer within sandbox \"b5f6907764512f90522cc380df6116355cd533d03d2f34ac3d1746b9f9967391\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a704e03879c961981e90a939f49031e3a4d0bb0f1dad41abce614c25bec838a9\"" Oct 8 19:30:51.969499 containerd[2022]: time="2024-10-08T19:30:51.967666779Z" level=info msg="StartContainer for \"a704e03879c961981e90a939f49031e3a4d0bb0f1dad41abce614c25bec838a9\"" Oct 8 19:30:51.969499 containerd[2022]: time="2024-10-08T19:30:51.967672241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-jp5mt,Uid:e1533211-1ee3-46f5-bdc8-48ced861e8c2,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:30:52.030011 systemd[1]: Started cri-containerd-a704e03879c961981e90a939f49031e3a4d0bb0f1dad41abce614c25bec838a9.scope - libcontainer container a704e03879c961981e90a939f49031e3a4d0bb0f1dad41abce614c25bec838a9. Oct 8 19:30:52.034177 containerd[2022]: time="2024-10-08T19:30:52.034012319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:30:52.034177 containerd[2022]: time="2024-10-08T19:30:52.034100984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:30:52.034177 containerd[2022]: time="2024-10-08T19:30:52.034134168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:30:52.034744 containerd[2022]: time="2024-10-08T19:30:52.034165264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:30:52.078493 systemd[1]: Started cri-containerd-8ad2d484ed38243fd6d0f510e0032eeb8f62c30788f83986c0ab133a73444b84.scope - libcontainer container 8ad2d484ed38243fd6d0f510e0032eeb8f62c30788f83986c0ab133a73444b84. Oct 8 19:30:52.128054 containerd[2022]: time="2024-10-08T19:30:52.127828008Z" level=info msg="StartContainer for \"a704e03879c961981e90a939f49031e3a4d0bb0f1dad41abce614c25bec838a9\" returns successfully" Oct 8 19:30:52.181533 containerd[2022]: time="2024-10-08T19:30:52.181464862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-jp5mt,Uid:e1533211-1ee3-46f5-bdc8-48ced861e8c2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8ad2d484ed38243fd6d0f510e0032eeb8f62c30788f83986c0ab133a73444b84\"" Oct 8 19:30:52.187588 containerd[2022]: time="2024-10-08T19:30:52.187419345Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:30:52.850904 kubelet[3248]: I1008 19:30:52.850667 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27rw5" podStartSLOduration=1.850517898 podStartE2EDuration="1.850517898s" podCreationTimestamp="2024-10-08 19:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:30:52.850124485 +0000 UTC m=+14.458762390" watchObservedRunningTime="2024-10-08 19:30:52.850517898 +0000 UTC m=+14.459155815" Oct 8 19:30:53.513554 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4092832429.mount: Deactivated successfully. Oct 8 19:30:54.449635 containerd[2022]: time="2024-10-08T19:30:54.449303622Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:30:54.455699 containerd[2022]: time="2024-10-08T19:30:54.455105678Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485967" Oct 8 19:30:54.461452 containerd[2022]: time="2024-10-08T19:30:54.461349350Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:30:54.469896 containerd[2022]: time="2024-10-08T19:30:54.469795331Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:30:54.471640 containerd[2022]: time="2024-10-08T19:30:54.471385418Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.283547147s" Oct 8 19:30:54.471640 containerd[2022]: time="2024-10-08T19:30:54.471448653Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 8 19:30:54.476715 containerd[2022]: time="2024-10-08T19:30:54.476491305Z" level=info msg="CreateContainer within sandbox \"8ad2d484ed38243fd6d0f510e0032eeb8f62c30788f83986c0ab133a73444b84\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:30:54.497779 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2075813748.mount: Deactivated successfully. Oct 8 19:30:54.503329 containerd[2022]: time="2024-10-08T19:30:54.503266354Z" level=info msg="CreateContainer within sandbox \"8ad2d484ed38243fd6d0f510e0032eeb8f62c30788f83986c0ab133a73444b84\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf\"" Oct 8 19:30:54.504326 containerd[2022]: time="2024-10-08T19:30:54.504135793Z" level=info msg="StartContainer for \"1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf\"" Oct 8 19:30:54.560507 systemd[1]: Started cri-containerd-1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf.scope - libcontainer container 1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf. Oct 8 19:30:54.608688 containerd[2022]: time="2024-10-08T19:30:54.608559960Z" level=info msg="StartContainer for \"1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf\" returns successfully" Oct 8 19:30:54.852298 kubelet[3248]: I1008 19:30:54.851980 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-jp5mt" podStartSLOduration=1.564281901 podStartE2EDuration="3.851956723s" podCreationTimestamp="2024-10-08 19:30:51 +0000 UTC" firstStartedPulling="2024-10-08 19:30:52.185743224 +0000 UTC m=+13.794381105" lastFinishedPulling="2024-10-08 19:30:54.473418058 +0000 UTC m=+16.082055927" observedRunningTime="2024-10-08 19:30:54.851792673 +0000 UTC m=+16.460430566" watchObservedRunningTime="2024-10-08 19:30:54.851956723 +0000 UTC m=+16.460594604" Oct 8 19:30:59.876244 kubelet[3248]: I1008 19:30:59.875449 3248 topology_manager.go:215] "Topology Admit Handler" podUID="5f6bc95e-1c99-4678-9df7-b33950aec663" podNamespace="calico-system" podName="calico-typha-59785b6f77-6c6jg" Oct 8 19:30:59.893019 systemd[1]: Created slice 
kubepods-besteffort-pod5f6bc95e_1c99_4678_9df7_b33950aec663.slice - libcontainer container kubepods-besteffort-pod5f6bc95e_1c99_4678_9df7_b33950aec663.slice. Oct 8 19:30:59.907715 kubelet[3248]: I1008 19:30:59.906994 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5f6bc95e-1c99-4678-9df7-b33950aec663-typha-certs\") pod \"calico-typha-59785b6f77-6c6jg\" (UID: \"5f6bc95e-1c99-4678-9df7-b33950aec663\") " pod="calico-system/calico-typha-59785b6f77-6c6jg" Oct 8 19:30:59.907715 kubelet[3248]: I1008 19:30:59.907085 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6bc95e-1c99-4678-9df7-b33950aec663-tigera-ca-bundle\") pod \"calico-typha-59785b6f77-6c6jg\" (UID: \"5f6bc95e-1c99-4678-9df7-b33950aec663\") " pod="calico-system/calico-typha-59785b6f77-6c6jg" Oct 8 19:30:59.907715 kubelet[3248]: I1008 19:30:59.907283 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5nc\" (UniqueName: \"kubernetes.io/projected/5f6bc95e-1c99-4678-9df7-b33950aec663-kube-api-access-gn5nc\") pod \"calico-typha-59785b6f77-6c6jg\" (UID: \"5f6bc95e-1c99-4678-9df7-b33950aec663\") " pod="calico-system/calico-typha-59785b6f77-6c6jg" Oct 8 19:31:00.026883 kubelet[3248]: I1008 19:31:00.026549 3248 topology_manager.go:215] "Topology Admit Handler" podUID="68685aa0-c8a4-4bef-9e5b-a10068a7df5d" podNamespace="calico-system" podName="calico-node-bv4rr" Oct 8 19:31:00.044852 systemd[1]: Created slice kubepods-besteffort-pod68685aa0_c8a4_4bef_9e5b_a10068a7df5d.slice - libcontainer container kubepods-besteffort-pod68685aa0_c8a4_4bef_9e5b_a10068a7df5d.slice. 
Oct 8 19:31:00.109620 kubelet[3248]: I1008 19:31:00.107882 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hjb\" (UniqueName: \"kubernetes.io/projected/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-kube-api-access-96hjb\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109620 kubelet[3248]: I1008 19:31:00.107946 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-var-run-calico\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109620 kubelet[3248]: I1008 19:31:00.107993 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-tigera-ca-bundle\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109620 kubelet[3248]: I1008 19:31:00.108031 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-policysync\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109620 kubelet[3248]: I1008 19:31:00.108066 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-var-lib-calico\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109996 kubelet[3248]: I1008 19:31:00.108100 
3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-xtables-lock\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109996 kubelet[3248]: I1008 19:31:00.108137 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-lib-modules\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109996 kubelet[3248]: I1008 19:31:00.108174 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-node-certs\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109996 kubelet[3248]: I1008 19:31:00.108236 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-cni-bin-dir\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.109996 kubelet[3248]: I1008 19:31:00.108276 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-flexvol-driver-host\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.110589 kubelet[3248]: I1008 19:31:00.108316 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-cni-log-dir\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.110589 kubelet[3248]: I1008 19:31:00.108357 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/68685aa0-c8a4-4bef-9e5b-a10068a7df5d-cni-net-dir\") pod \"calico-node-bv4rr\" (UID: \"68685aa0-c8a4-4bef-9e5b-a10068a7df5d\") " pod="calico-system/calico-node-bv4rr" Oct 8 19:31:00.204244 containerd[2022]: time="2024-10-08T19:31:00.203544049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59785b6f77-6c6jg,Uid:5f6bc95e-1c99-4678-9df7-b33950aec663,Namespace:calico-system,Attempt:0,}" Oct 8 19:31:00.217403 kubelet[3248]: E1008 19:31:00.215501 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.217403 kubelet[3248]: W1008 19:31:00.216405 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.217403 kubelet[3248]: E1008 19:31:00.216474 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.226550 kubelet[3248]: E1008 19:31:00.223655 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.226550 kubelet[3248]: W1008 19:31:00.223701 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.226550 kubelet[3248]: E1008 19:31:00.223735 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.245458 kubelet[3248]: I1008 19:31:00.240131 3248 topology_manager.go:215] "Topology Admit Handler" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7" podNamespace="calico-system" podName="csi-node-driver-zw82g" Oct 8 19:31:00.245458 kubelet[3248]: E1008 19:31:00.242819 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7" Oct 8 19:31:00.283381 kubelet[3248]: E1008 19:31:00.283337 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.283601 kubelet[3248]: W1008 19:31:00.283572 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.283784 kubelet[3248]: E1008 19:31:00.283721 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.290791 kubelet[3248]: E1008 19:31:00.290520 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.290791 kubelet[3248]: W1008 19:31:00.290554 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.290791 kubelet[3248]: E1008 19:31:00.290587 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.293266 kubelet[3248]: E1008 19:31:00.293228 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.293585 kubelet[3248]: W1008 19:31:00.293447 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.293585 kubelet[3248]: E1008 19:31:00.293490 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.294707 kubelet[3248]: E1008 19:31:00.294553 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.294707 kubelet[3248]: W1008 19:31:00.294583 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.294707 kubelet[3248]: E1008 19:31:00.294614 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.295523 kubelet[3248]: E1008 19:31:00.295316 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.295523 kubelet[3248]: W1008 19:31:00.295347 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.295523 kubelet[3248]: E1008 19:31:00.295376 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.296552 kubelet[3248]: E1008 19:31:00.296299 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.296552 kubelet[3248]: W1008 19:31:00.296332 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.296552 kubelet[3248]: E1008 19:31:00.296363 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.297147 kubelet[3248]: E1008 19:31:00.296920 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.297147 kubelet[3248]: W1008 19:31:00.296942 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.297147 kubelet[3248]: E1008 19:31:00.296969 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.297812 kubelet[3248]: E1008 19:31:00.297736 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.298044 kubelet[3248]: W1008 19:31:00.297765 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.298044 kubelet[3248]: E1008 19:31:00.297957 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.298701 kubelet[3248]: E1008 19:31:00.298559 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.298701 kubelet[3248]: W1008 19:31:00.298584 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.298701 kubelet[3248]: E1008 19:31:00.298608 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.299365 kubelet[3248]: E1008 19:31:00.299185 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.299365 kubelet[3248]: W1008 19:31:00.299269 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.299365 kubelet[3248]: E1008 19:31:00.299293 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.300049 kubelet[3248]: E1008 19:31:00.299874 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.300049 kubelet[3248]: W1008 19:31:00.299897 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.300049 kubelet[3248]: E1008 19:31:00.299920 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.300740 kubelet[3248]: E1008 19:31:00.300558 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.300740 kubelet[3248]: W1008 19:31:00.300584 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.300740 kubelet[3248]: E1008 19:31:00.300609 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.307317 kubelet[3248]: E1008 19:31:00.306374 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.307317 kubelet[3248]: W1008 19:31:00.306412 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.307317 kubelet[3248]: E1008 19:31:00.306445 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.307317 kubelet[3248]: E1008 19:31:00.307041 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.307317 kubelet[3248]: W1008 19:31:00.307065 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.307317 kubelet[3248]: E1008 19:31:00.307093 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.308076 kubelet[3248]: E1008 19:31:00.307865 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.308076 kubelet[3248]: W1008 19:31:00.307892 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.308076 kubelet[3248]: E1008 19:31:00.307921 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.308952 kubelet[3248]: E1008 19:31:00.308471 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.308952 kubelet[3248]: W1008 19:31:00.308497 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.308952 kubelet[3248]: E1008 19:31:00.308526 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.309508 kubelet[3248]: E1008 19:31:00.309335 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.309508 kubelet[3248]: W1008 19:31:00.309368 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.309508 kubelet[3248]: E1008 19:31:00.309398 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.312539 kubelet[3248]: E1008 19:31:00.310616 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.312539 kubelet[3248]: W1008 19:31:00.310649 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.312539 kubelet[3248]: E1008 19:31:00.310679 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.313429 kubelet[3248]: E1008 19:31:00.313386 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.313824 kubelet[3248]: W1008 19:31:00.313563 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.313824 kubelet[3248]: E1008 19:31:00.313607 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.314966 kubelet[3248]: E1008 19:31:00.314930 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.317084 kubelet[3248]: W1008 19:31:00.316794 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.317084 kubelet[3248]: E1008 19:31:00.316858 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.318558 kubelet[3248]: E1008 19:31:00.318517 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.319082 kubelet[3248]: W1008 19:31:00.318738 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.319082 kubelet[3248]: E1008 19:31:00.318780 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.319911 kubelet[3248]: E1008 19:31:00.319878 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.320231 kubelet[3248]: W1008 19:31:00.320174 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.320385 kubelet[3248]: E1008 19:31:00.320360 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.321574 kubelet[3248]: I1008 19:31:00.321269 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/722ae81b-5bf7-40e0-b53d-6784c26bbee7-socket-dir\") pod \"csi-node-driver-zw82g\" (UID: \"722ae81b-5bf7-40e0-b53d-6784c26bbee7\") " pod="calico-system/csi-node-driver-zw82g" Oct 8 19:31:00.322293 kubelet[3248]: E1008 19:31:00.321772 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.322293 kubelet[3248]: W1008 19:31:00.321795 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.322293 kubelet[3248]: E1008 19:31:00.321831 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.324954 kubelet[3248]: E1008 19:31:00.324709 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.324954 kubelet[3248]: W1008 19:31:00.324772 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.324954 kubelet[3248]: E1008 19:31:00.324898 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.332068 kubelet[3248]: E1008 19:31:00.330621 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.332068 kubelet[3248]: W1008 19:31:00.330655 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.332068 kubelet[3248]: E1008 19:31:00.330689 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.332068 kubelet[3248]: I1008 19:31:00.330738 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/722ae81b-5bf7-40e0-b53d-6784c26bbee7-registration-dir\") pod \"csi-node-driver-zw82g\" (UID: \"722ae81b-5bf7-40e0-b53d-6784c26bbee7\") " pod="calico-system/csi-node-driver-zw82g" Oct 8 19:31:00.332068 kubelet[3248]: E1008 19:31:00.331160 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.332068 kubelet[3248]: W1008 19:31:00.331183 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.332068 kubelet[3248]: E1008 19:31:00.331241 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.332068 kubelet[3248]: I1008 19:31:00.331282 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/722ae81b-5bf7-40e0-b53d-6784c26bbee7-varrun\") pod \"csi-node-driver-zw82g\" (UID: \"722ae81b-5bf7-40e0-b53d-6784c26bbee7\") " pod="calico-system/csi-node-driver-zw82g" Oct 8 19:31:00.333275 kubelet[3248]: E1008 19:31:00.333236 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.333779 kubelet[3248]: W1008 19:31:00.333740 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.333972 kubelet[3248]: E1008 19:31:00.333948 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.334309 kubelet[3248]: I1008 19:31:00.334265 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clt7b\" (UniqueName: \"kubernetes.io/projected/722ae81b-5bf7-40e0-b53d-6784c26bbee7-kube-api-access-clt7b\") pod \"csi-node-driver-zw82g\" (UID: \"722ae81b-5bf7-40e0-b53d-6784c26bbee7\") " pod="calico-system/csi-node-driver-zw82g" Oct 8 19:31:00.335037 kubelet[3248]: E1008 19:31:00.335005 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.335268 kubelet[3248]: W1008 19:31:00.335238 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.336743 kubelet[3248]: E1008 19:31:00.336262 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.337468 kubelet[3248]: E1008 19:31:00.336937 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.337468 kubelet[3248]: W1008 19:31:00.336975 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.337468 kubelet[3248]: E1008 19:31:00.337013 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.337468 kubelet[3248]: E1008 19:31:00.337420 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.337468 kubelet[3248]: W1008 19:31:00.337439 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.338009 kubelet[3248]: E1008 19:31:00.337803 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.338162 kubelet[3248]: I1008 19:31:00.338106 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/722ae81b-5bf7-40e0-b53d-6784c26bbee7-kubelet-dir\") pod \"csi-node-driver-zw82g\" (UID: \"722ae81b-5bf7-40e0-b53d-6784c26bbee7\") " pod="calico-system/csi-node-driver-zw82g" Oct 8 19:31:00.338530 kubelet[3248]: E1008 19:31:00.338505 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.338683 kubelet[3248]: W1008 19:31:00.338655 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.338955 kubelet[3248]: E1008 19:31:00.338790 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.339118 containerd[2022]: time="2024-10-08T19:31:00.338404122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:31:00.339118 containerd[2022]: time="2024-10-08T19:31:00.338526811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:00.339118 containerd[2022]: time="2024-10-08T19:31:00.338573154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:31:00.339118 containerd[2022]: time="2024-10-08T19:31:00.338607347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:00.339822 kubelet[3248]: E1008 19:31:00.339792 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.340067 kubelet[3248]: W1008 19:31:00.340027 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.340352 kubelet[3248]: E1008 19:31:00.340170 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.341332 kubelet[3248]: E1008 19:31:00.341298 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.341834 kubelet[3248]: W1008 19:31:00.341665 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.341834 kubelet[3248]: E1008 19:31:00.341716 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.343052 kubelet[3248]: E1008 19:31:00.342816 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.343052 kubelet[3248]: W1008 19:31:00.342848 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.343052 kubelet[3248]: E1008 19:31:00.342879 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.344020 kubelet[3248]: E1008 19:31:00.343662 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.344020 kubelet[3248]: W1008 19:31:00.343691 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.344020 kubelet[3248]: E1008 19:31:00.343718 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.345356 kubelet[3248]: E1008 19:31:00.344875 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.345356 kubelet[3248]: W1008 19:31:00.345015 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.345356 kubelet[3248]: E1008 19:31:00.345045 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.355231 containerd[2022]: time="2024-10-08T19:31:00.354732270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bv4rr,Uid:68685aa0-c8a4-4bef-9e5b-a10068a7df5d,Namespace:calico-system,Attempt:0,}" Oct 8 19:31:00.397182 systemd[1]: Started cri-containerd-f4e6b44b157786ed71cdb0cfc5a05f712f6cb2c33b8d924ed4f60fb1789fcec7.scope - libcontainer container f4e6b44b157786ed71cdb0cfc5a05f712f6cb2c33b8d924ed4f60fb1789fcec7. Oct 8 19:31:00.443067 containerd[2022]: time="2024-10-08T19:31:00.441808475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:31:00.443067 containerd[2022]: time="2024-10-08T19:31:00.441921475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:00.443067 containerd[2022]: time="2024-10-08T19:31:00.441961960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:31:00.443067 containerd[2022]: time="2024-10-08T19:31:00.441995973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:00.449683 kubelet[3248]: E1008 19:31:00.449280 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.449683 kubelet[3248]: W1008 19:31:00.449337 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.449683 kubelet[3248]: E1008 19:31:00.449374 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.453566 kubelet[3248]: E1008 19:31:00.451045 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.453566 kubelet[3248]: W1008 19:31:00.451086 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.453566 kubelet[3248]: E1008 19:31:00.451164 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.454004 kubelet[3248]: E1008 19:31:00.453971 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.456237 kubelet[3248]: W1008 19:31:00.454100 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.456237 kubelet[3248]: E1008 19:31:00.454152 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.459835 kubelet[3248]: E1008 19:31:00.459701 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.459835 kubelet[3248]: W1008 19:31:00.459736 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.460272 kubelet[3248]: E1008 19:31:00.460032 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.465442 kubelet[3248]: E1008 19:31:00.465361 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.465442 kubelet[3248]: W1008 19:31:00.465398 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.466136 kubelet[3248]: E1008 19:31:00.465740 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.467942 kubelet[3248]: E1008 19:31:00.467635 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.467942 kubelet[3248]: W1008 19:31:00.467670 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.467942 kubelet[3248]: E1008 19:31:00.467887 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.468680 kubelet[3248]: E1008 19:31:00.468647 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.468845 kubelet[3248]: W1008 19:31:00.468818 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.469424 kubelet[3248]: E1008 19:31:00.469082 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.471605 kubelet[3248]: E1008 19:31:00.471319 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.471605 kubelet[3248]: W1008 19:31:00.471353 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.472909 kubelet[3248]: E1008 19:31:00.472111 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.473160 kubelet[3248]: E1008 19:31:00.473133 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.473364 kubelet[3248]: W1008 19:31:00.473335 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.476945 kubelet[3248]: E1008 19:31:00.476507 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.476945 kubelet[3248]: W1008 19:31:00.476541 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.480603 kubelet[3248]: E1008 19:31:00.480051 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.481563 kubelet[3248]: E1008 19:31:00.481033 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.482077 kubelet[3248]: E1008 19:31:00.481117 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.482077 kubelet[3248]: W1008 19:31:00.481850 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.482722 kubelet[3248]: E1008 19:31:00.482508 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.484716 kubelet[3248]: E1008 19:31:00.483921 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.484716 kubelet[3248]: W1008 19:31:00.483955 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.485028 kubelet[3248]: E1008 19:31:00.484996 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.486291 kubelet[3248]: E1008 19:31:00.486024 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.486291 kubelet[3248]: W1008 19:31:00.486059 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.486763 kubelet[3248]: E1008 19:31:00.486728 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.487656 kubelet[3248]: E1008 19:31:00.487622 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.488056 kubelet[3248]: W1008 19:31:00.487854 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.488056 kubelet[3248]: E1008 19:31:00.487986 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.489146 kubelet[3248]: E1008 19:31:00.488937 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.489146 kubelet[3248]: W1008 19:31:00.488968 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.489573 kubelet[3248]: E1008 19:31:00.489433 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.491328 kubelet[3248]: E1008 19:31:00.489998 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.491328 kubelet[3248]: W1008 19:31:00.490030 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.491816 kubelet[3248]: E1008 19:31:00.491614 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.492817 kubelet[3248]: E1008 19:31:00.492628 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.492817 kubelet[3248]: W1008 19:31:00.492659 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.493566 kubelet[3248]: E1008 19:31:00.493116 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.493566 kubelet[3248]: E1008 19:31:00.493510 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.493566 kubelet[3248]: W1008 19:31:00.493531 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.494132 kubelet[3248]: E1008 19:31:00.493948 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.495589 kubelet[3248]: E1008 19:31:00.495157 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.495589 kubelet[3248]: W1008 19:31:00.495210 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.496529 kubelet[3248]: E1008 19:31:00.495972 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.497395 kubelet[3248]: E1008 19:31:00.497294 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.497395 kubelet[3248]: W1008 19:31:00.497330 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.497850 kubelet[3248]: E1008 19:31:00.497634 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.498377 kubelet[3248]: E1008 19:31:00.498349 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.499070 kubelet[3248]: W1008 19:31:00.498641 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.499419 kubelet[3248]: E1008 19:31:00.499348 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.501147 kubelet[3248]: E1008 19:31:00.500931 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.501147 kubelet[3248]: W1008 19:31:00.500967 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.501666 kubelet[3248]: E1008 19:31:00.501457 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.501903 kubelet[3248]: E1008 19:31:00.501845 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.501903 kubelet[3248]: W1008 19:31:00.501870 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.504342 kubelet[3248]: E1008 19:31:00.502163 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.505501 kubelet[3248]: E1008 19:31:00.504891 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.505501 kubelet[3248]: W1008 19:31:00.504923 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.505501 kubelet[3248]: E1008 19:31:00.504963 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.506057 kubelet[3248]: E1008 19:31:00.506028 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.506239 kubelet[3248]: W1008 19:31:00.506177 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.506375 kubelet[3248]: E1008 19:31:00.506350 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:00.535502 systemd[1]: Started cri-containerd-2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2.scope - libcontainer container 2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2. Oct 8 19:31:00.546163 kubelet[3248]: E1008 19:31:00.546115 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:00.546163 kubelet[3248]: W1008 19:31:00.546153 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:00.547283 kubelet[3248]: E1008 19:31:00.546229 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:00.622447 containerd[2022]: time="2024-10-08T19:31:00.622287543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59785b6f77-6c6jg,Uid:5f6bc95e-1c99-4678-9df7-b33950aec663,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4e6b44b157786ed71cdb0cfc5a05f712f6cb2c33b8d924ed4f60fb1789fcec7\"" Oct 8 19:31:00.632454 containerd[2022]: time="2024-10-08T19:31:00.632386678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:31:00.655643 containerd[2022]: time="2024-10-08T19:31:00.655574298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bv4rr,Uid:68685aa0-c8a4-4bef-9e5b-a10068a7df5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2\"" Oct 8 19:31:01.608113 kubelet[3248]: E1008 19:31:01.608027 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7" Oct 8 19:31:03.092452 containerd[2022]: time="2024-10-08T19:31:03.092373810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:03.095712 containerd[2022]: time="2024-10-08T19:31:03.095577887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:31:03.099286 containerd[2022]: time="2024-10-08T19:31:03.097285404Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:03.103234 containerd[2022]: time="2024-10-08T19:31:03.103147898Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:03.106046 containerd[2022]: time="2024-10-08T19:31:03.105859994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.473405001s" Oct 8 19:31:03.106046 containerd[2022]: time="2024-10-08T19:31:03.105944924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:31:03.110959 containerd[2022]: time="2024-10-08T19:31:03.110781865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:31:03.128623 containerd[2022]: time="2024-10-08T19:31:03.128550985Z" level=info msg="CreateContainer within sandbox \"f4e6b44b157786ed71cdb0cfc5a05f712f6cb2c33b8d924ed4f60fb1789fcec7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:31:03.176907 containerd[2022]: time="2024-10-08T19:31:03.176786219Z" level=info msg="CreateContainer within sandbox \"f4e6b44b157786ed71cdb0cfc5a05f712f6cb2c33b8d924ed4f60fb1789fcec7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4055ffd340b2064d6b4d12ebc440b9166521d17269baf935264aa16d14af7626\"" Oct 8 19:31:03.179666 containerd[2022]: time="2024-10-08T19:31:03.178084008Z" level=info msg="StartContainer for \"4055ffd340b2064d6b4d12ebc440b9166521d17269baf935264aa16d14af7626\"" Oct 8 19:31:03.253977 systemd[1]: Started cri-containerd-4055ffd340b2064d6b4d12ebc440b9166521d17269baf935264aa16d14af7626.scope - libcontainer container 
4055ffd340b2064d6b4d12ebc440b9166521d17269baf935264aa16d14af7626. Oct 8 19:31:03.378057 containerd[2022]: time="2024-10-08T19:31:03.377892472Z" level=info msg="StartContainer for \"4055ffd340b2064d6b4d12ebc440b9166521d17269baf935264aa16d14af7626\" returns successfully" Oct 8 19:31:03.607239 kubelet[3248]: E1008 19:31:03.607162 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7" Oct 8 19:31:03.947997 kubelet[3248]: E1008 19:31:03.947931 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.947997 kubelet[3248]: W1008 19:31:03.947970 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.948564 kubelet[3248]: E1008 19:31:03.948005 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:03.948564 kubelet[3248]: E1008 19:31:03.948417 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.948564 kubelet[3248]: W1008 19:31:03.948440 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.948564 kubelet[3248]: E1008 19:31:03.948465 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:03.949396 kubelet[3248]: E1008 19:31:03.949351 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.949396 kubelet[3248]: W1008 19:31:03.949386 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.949605 kubelet[3248]: E1008 19:31:03.949442 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:03.950180 kubelet[3248]: E1008 19:31:03.950102 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.950180 kubelet[3248]: W1008 19:31:03.950134 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.950180 kubelet[3248]: E1008 19:31:03.950164 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:03.951149 kubelet[3248]: E1008 19:31:03.951086 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.952376 kubelet[3248]: W1008 19:31:03.952291 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.952578 kubelet[3248]: E1008 19:31:03.952361 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:03.952965 kubelet[3248]: E1008 19:31:03.952882 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.952965 kubelet[3248]: W1008 19:31:03.952907 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.952965 kubelet[3248]: E1008 19:31:03.952935 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:03.953700 kubelet[3248]: E1008 19:31:03.953546 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.953700 kubelet[3248]: W1008 19:31:03.953572 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.953700 kubelet[3248]: E1008 19:31:03.953649 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:03.954481 kubelet[3248]: E1008 19:31:03.954440 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.954481 kubelet[3248]: W1008 19:31:03.954474 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.954654 kubelet[3248]: E1008 19:31:03.954505 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:03.956457 kubelet[3248]: E1008 19:31:03.956407 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.956609 kubelet[3248]: W1008 19:31:03.956446 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.956609 kubelet[3248]: E1008 19:31:03.956536 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:03.956968 kubelet[3248]: E1008 19:31:03.956928 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.956968 kubelet[3248]: W1008 19:31:03.956956 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.957158 kubelet[3248]: E1008 19:31:03.956982 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:03.957566 kubelet[3248]: E1008 19:31:03.957530 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.957566 kubelet[3248]: W1008 19:31:03.957561 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.957744 kubelet[3248]: E1008 19:31:03.957589 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:03.958013 kubelet[3248]: E1008 19:31:03.957979 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.958013 kubelet[3248]: W1008 19:31:03.958006 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.958154 kubelet[3248]: E1008 19:31:03.958031 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:03.958530 kubelet[3248]: E1008 19:31:03.958497 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.958530 kubelet[3248]: W1008 19:31:03.958525 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.958697 kubelet[3248]: E1008 19:31:03.958555 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:03.959114 kubelet[3248]: E1008 19:31:03.959038 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.959114 kubelet[3248]: W1008 19:31:03.959058 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.959114 kubelet[3248]: E1008 19:31:03.959082 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:03.959787 kubelet[3248]: E1008 19:31:03.959717 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:03.959787 kubelet[3248]: W1008 19:31:03.959741 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:03.959787 kubelet[3248]: E1008 19:31:03.959768 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.012901 kubelet[3248]: E1008 19:31:04.012034 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.012901 kubelet[3248]: W1008 19:31:04.012073 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.012901 kubelet[3248]: E1008 19:31:04.012107 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.013288 kubelet[3248]: E1008 19:31:04.013011 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.013288 kubelet[3248]: W1008 19:31:04.013036 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.013288 kubelet[3248]: E1008 19:31:04.013068 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.015018 kubelet[3248]: E1008 19:31:04.013632 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.015018 kubelet[3248]: W1008 19:31:04.013666 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.015018 kubelet[3248]: E1008 19:31:04.013756 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.015018 kubelet[3248]: E1008 19:31:04.014540 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.015018 kubelet[3248]: W1008 19:31:04.014567 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.015018 kubelet[3248]: E1008 19:31:04.014605 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.015855 kubelet[3248]: E1008 19:31:04.015622 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.016345 kubelet[3248]: W1008 19:31:04.016302 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.016453 kubelet[3248]: E1008 19:31:04.016359 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.016859 kubelet[3248]: E1008 19:31:04.016828 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.016951 kubelet[3248]: W1008 19:31:04.016857 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.017121 kubelet[3248]: E1008 19:31:04.017042 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.018825 kubelet[3248]: E1008 19:31:04.018661 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.018825 kubelet[3248]: W1008 19:31:04.018700 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.018825 kubelet[3248]: E1008 19:31:04.018766 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.019102 kubelet[3248]: E1008 19:31:04.019079 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.019102 kubelet[3248]: W1008 19:31:04.019095 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.020056 kubelet[3248]: E1008 19:31:04.019508 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.020592 kubelet[3248]: E1008 19:31:04.020269 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.020592 kubelet[3248]: W1008 19:31:04.020295 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.020867 kubelet[3248]: E1008 19:31:04.020702 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.021742 kubelet[3248]: E1008 19:31:04.021700 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.021872 kubelet[3248]: W1008 19:31:04.021767 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.023729 kubelet[3248]: E1008 19:31:04.022115 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.023729 kubelet[3248]: W1008 19:31:04.022134 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.023729 kubelet[3248]: E1008 19:31:04.022461 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.023729 kubelet[3248]: W1008 19:31:04.022478 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.023729 kubelet[3248]: E1008 19:31:04.022522 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.023729 kubelet[3248]: E1008 19:31:04.022570 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.023729 kubelet[3248]: E1008 19:31:04.022857 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.023729 kubelet[3248]: E1008 19:31:04.023046 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.023729 kubelet[3248]: W1008 19:31:04.023146 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.023729 kubelet[3248]: E1008 19:31:04.023265 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.024368 kubelet[3248]: E1008 19:31:04.023865 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.024368 kubelet[3248]: W1008 19:31:04.023890 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.024368 kubelet[3248]: E1008 19:31:04.023939 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.024541 kubelet[3248]: E1008 19:31:04.024494 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.024541 kubelet[3248]: W1008 19:31:04.024514 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.025665 kubelet[3248]: E1008 19:31:04.024669 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.025665 kubelet[3248]: E1008 19:31:04.025577 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.025665 kubelet[3248]: W1008 19:31:04.025606 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.025665 kubelet[3248]: E1008 19:31:04.025637 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.028381 kubelet[3248]: E1008 19:31:04.028320 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.028381 kubelet[3248]: W1008 19:31:04.028360 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.028610 kubelet[3248]: E1008 19:31:04.028404 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:31:04.029275 kubelet[3248]: E1008 19:31:04.028828 3248 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:31:04.029275 kubelet[3248]: W1008 19:31:04.028860 3248 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:31:04.029275 kubelet[3248]: E1008 19:31:04.028888 3248 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:31:04.503689 containerd[2022]: time="2024-10-08T19:31:04.503600276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:04.505767 containerd[2022]: time="2024-10-08T19:31:04.505641116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:31:04.507268 containerd[2022]: time="2024-10-08T19:31:04.507174799Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:04.512492 containerd[2022]: time="2024-10-08T19:31:04.512339924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:04.516352 containerd[2022]: time="2024-10-08T19:31:04.516276691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.405427076s" Oct 8 19:31:04.516352 containerd[2022]: time="2024-10-08T19:31:04.516347058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:31:04.521936 containerd[2022]: time="2024-10-08T19:31:04.521877898Z" level=info msg="CreateContainer within sandbox \"2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:31:04.556122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount694526998.mount: Deactivated successfully. Oct 8 19:31:04.567905 containerd[2022]: time="2024-10-08T19:31:04.567087080Z" level=info msg="CreateContainer within sandbox \"2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015\"" Oct 8 19:31:04.570327 containerd[2022]: time="2024-10-08T19:31:04.570003133Z" level=info msg="StartContainer for \"4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015\"" Oct 8 19:31:04.665797 systemd[1]: Started cri-containerd-4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015.scope - libcontainer container 4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015. Oct 8 19:31:04.752233 containerd[2022]: time="2024-10-08T19:31:04.751127283Z" level=info msg="StartContainer for \"4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015\" returns successfully" Oct 8 19:31:04.793166 systemd[1]: cri-containerd-4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015.scope: Deactivated successfully. Oct 8 19:31:04.854718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015-rootfs.mount: Deactivated successfully. 
Oct 8 19:31:04.883825 kubelet[3248]: I1008 19:31:04.881832 3248 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:31:04.916978 kubelet[3248]: I1008 19:31:04.916448 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59785b6f77-6c6jg" podStartSLOduration=3.4374256389999998 podStartE2EDuration="5.916424944s" podCreationTimestamp="2024-10-08 19:30:59 +0000 UTC" firstStartedPulling="2024-10-08 19:31:00.630614304 +0000 UTC m=+22.239252185" lastFinishedPulling="2024-10-08 19:31:03.109613621 +0000 UTC m=+24.718251490" observedRunningTime="2024-10-08 19:31:03.90316039 +0000 UTC m=+25.511798295" watchObservedRunningTime="2024-10-08 19:31:04.916424944 +0000 UTC m=+26.525062849"
Oct 8 19:31:05.467481 containerd[2022]: time="2024-10-08T19:31:05.467394287Z" level=info msg="shim disconnected" id=4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015 namespace=k8s.io
Oct 8 19:31:05.467481 containerd[2022]: time="2024-10-08T19:31:05.467475015Z" level=warning msg="cleaning up after shim disconnected" id=4c29ef8e687f700e57967289f5bcd87e45bfa1e585eeedecbe31931fb5e78015 namespace=k8s.io
Oct 8 19:31:05.467886 containerd[2022]: time="2024-10-08T19:31:05.467498979Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:31:05.607493 kubelet[3248]: E1008 19:31:05.607340 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7"
Oct 8 19:31:05.891036 containerd[2022]: time="2024-10-08T19:31:05.890123617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 8 19:31:07.607782 kubelet[3248]: E1008 19:31:07.607718 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7"
Oct 8 19:31:09.607823 kubelet[3248]: E1008 19:31:09.607766 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7"
Oct 8 19:31:09.666573 containerd[2022]: time="2024-10-08T19:31:09.665065075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:31:09.668248 containerd[2022]: time="2024-10-08T19:31:09.668169682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887"
Oct 8 19:31:09.670158 containerd[2022]: time="2024-10-08T19:31:09.670105074Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:31:09.677420 containerd[2022]: time="2024-10-08T19:31:09.677337824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:31:09.687801 containerd[2022]: time="2024-10-08T19:31:09.686934652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.796702549s"
Oct 8 19:31:09.687801 containerd[2022]: time="2024-10-08T19:31:09.687007168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\""
Oct 8 19:31:09.695027 containerd[2022]: time="2024-10-08T19:31:09.694951695Z" level=info msg="CreateContainer within sandbox \"2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 8 19:31:09.718237 containerd[2022]: time="2024-10-08T19:31:09.718124416Z" level=info msg="CreateContainer within sandbox \"2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef\""
Oct 8 19:31:09.719710 containerd[2022]: time="2024-10-08T19:31:09.719435855Z" level=info msg="StartContainer for \"f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef\""
Oct 8 19:31:09.780499 systemd[1]: Started cri-containerd-f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef.scope - libcontainer container f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef.
Oct 8 19:31:09.832360 containerd[2022]: time="2024-10-08T19:31:09.832286157Z" level=info msg="StartContainer for \"f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef\" returns successfully"
Oct 8 19:31:11.118353 systemd[1]: cri-containerd-f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef.scope: Deactivated successfully.
Oct 8 19:31:11.154091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef-rootfs.mount: Deactivated successfully.
Oct 8 19:31:11.186966 kubelet[3248]: I1008 19:31:11.186924 3248 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 8 19:31:11.227289 kubelet[3248]: I1008 19:31:11.227220 3248 topology_manager.go:215] "Topology Admit Handler" podUID="3d74a8c9-7cfd-4102-a721-6998bf392d12" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cgpbr"
Oct 8 19:31:11.234091 kubelet[3248]: I1008 19:31:11.233983 3248 topology_manager.go:215] "Topology Admit Handler" podUID="1cb798ed-2ab0-4a6d-84b4-808d1cf3653e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mlxwm"
Oct 8 19:31:11.242234 kubelet[3248]: I1008 19:31:11.239991 3248 topology_manager.go:215] "Topology Admit Handler" podUID="8a2d3aef-c6d0-496a-87a4-07be591e7fdd" podNamespace="calico-system" podName="calico-kube-controllers-6fb464564c-vrn29"
Oct 8 19:31:11.276425 systemd[1]: Created slice kubepods-burstable-pod3d74a8c9_7cfd_4102_a721_6998bf392d12.slice - libcontainer container kubepods-burstable-pod3d74a8c9_7cfd_4102_a721_6998bf392d12.slice.
Oct 8 19:31:11.299599 systemd[1]: Created slice kubepods-burstable-pod1cb798ed_2ab0_4a6d_84b4_808d1cf3653e.slice - libcontainer container kubepods-burstable-pod1cb798ed_2ab0_4a6d_84b4_808d1cf3653e.slice.
Oct 8 19:31:11.318246 systemd[1]: Created slice kubepods-besteffort-pod8a2d3aef_c6d0_496a_87a4_07be591e7fdd.slice - libcontainer container kubepods-besteffort-pod8a2d3aef_c6d0_496a_87a4_07be591e7fdd.slice.
Oct 8 19:31:11.367438 kubelet[3248]: I1008 19:31:11.367358 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d74a8c9-7cfd-4102-a721-6998bf392d12-config-volume\") pod \"coredns-7db6d8ff4d-cgpbr\" (UID: \"3d74a8c9-7cfd-4102-a721-6998bf392d12\") " pod="kube-system/coredns-7db6d8ff4d-cgpbr"
Oct 8 19:31:11.367438 kubelet[3248]: I1008 19:31:11.367440 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2d3aef-c6d0-496a-87a4-07be591e7fdd-tigera-ca-bundle\") pod \"calico-kube-controllers-6fb464564c-vrn29\" (UID: \"8a2d3aef-c6d0-496a-87a4-07be591e7fdd\") " pod="calico-system/calico-kube-controllers-6fb464564c-vrn29"
Oct 8 19:31:11.367741 kubelet[3248]: I1008 19:31:11.367493 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xkwh\" (UniqueName: \"kubernetes.io/projected/3d74a8c9-7cfd-4102-a721-6998bf392d12-kube-api-access-9xkwh\") pod \"coredns-7db6d8ff4d-cgpbr\" (UID: \"3d74a8c9-7cfd-4102-a721-6998bf392d12\") " pod="kube-system/coredns-7db6d8ff4d-cgpbr"
Oct 8 19:31:11.367741 kubelet[3248]: I1008 19:31:11.367541 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whkp8\" (UniqueName: \"kubernetes.io/projected/1cb798ed-2ab0-4a6d-84b4-808d1cf3653e-kube-api-access-whkp8\") pod \"coredns-7db6d8ff4d-mlxwm\" (UID: \"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e\") " pod="kube-system/coredns-7db6d8ff4d-mlxwm"
Oct 8 19:31:11.367741 kubelet[3248]: I1008 19:31:11.367581 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdf2t\" (UniqueName: \"kubernetes.io/projected/8a2d3aef-c6d0-496a-87a4-07be591e7fdd-kube-api-access-xdf2t\") pod \"calico-kube-controllers-6fb464564c-vrn29\" (UID: \"8a2d3aef-c6d0-496a-87a4-07be591e7fdd\") " pod="calico-system/calico-kube-controllers-6fb464564c-vrn29"
Oct 8 19:31:11.367741 kubelet[3248]: I1008 19:31:11.367622 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cb798ed-2ab0-4a6d-84b4-808d1cf3653e-config-volume\") pod \"coredns-7db6d8ff4d-mlxwm\" (UID: \"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e\") " pod="kube-system/coredns-7db6d8ff4d-mlxwm"
Oct 8 19:31:11.587665 containerd[2022]: time="2024-10-08T19:31:11.587593240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cgpbr,Uid:3d74a8c9-7cfd-4102-a721-6998bf392d12,Namespace:kube-system,Attempt:0,}"
Oct 8 19:31:11.599169 containerd[2022]: time="2024-10-08T19:31:11.598997499Z" level=info msg="shim disconnected" id=f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef namespace=k8s.io
Oct 8 19:31:11.599169 containerd[2022]: time="2024-10-08T19:31:11.599070268Z" level=warning msg="cleaning up after shim disconnected" id=f5580c80733c8fa289834f715d9b3f605b697a01c71469653ae1559aad2b67ef namespace=k8s.io
Oct 8 19:31:11.599169 containerd[2022]: time="2024-10-08T19:31:11.599091038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:31:11.612074 containerd[2022]: time="2024-10-08T19:31:11.611143477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlxwm,Uid:1cb798ed-2ab0-4a6d-84b4-808d1cf3653e,Namespace:kube-system,Attempt:0,}"
Oct 8 19:31:11.631690 containerd[2022]: time="2024-10-08T19:31:11.631358844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb464564c-vrn29,Uid:8a2d3aef-c6d0-496a-87a4-07be591e7fdd,Namespace:calico-system,Attempt:0,}"
Oct 8 19:31:11.641437 systemd[1]: Created slice kubepods-besteffort-pod722ae81b_5bf7_40e0_b53d_6784c26bbee7.slice - libcontainer container kubepods-besteffort-pod722ae81b_5bf7_40e0_b53d_6784c26bbee7.slice.
Oct 8 19:31:11.655950 containerd[2022]: time="2024-10-08T19:31:11.655605573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw82g,Uid:722ae81b-5bf7-40e0-b53d-6784c26bbee7,Namespace:calico-system,Attempt:0,}"
Oct 8 19:31:11.865039 containerd[2022]: time="2024-10-08T19:31:11.864673821Z" level=error msg="Failed to destroy network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.866416 containerd[2022]: time="2024-10-08T19:31:11.865742115Z" level=error msg="encountered an error cleaning up failed sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.866416 containerd[2022]: time="2024-10-08T19:31:11.865903727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cgpbr,Uid:3d74a8c9-7cfd-4102-a721-6998bf392d12,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.866671 kubelet[3248]: E1008 19:31:11.866530 3248 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.866763 kubelet[3248]: E1008 19:31:11.866671 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cgpbr"
Oct 8 19:31:11.866827 kubelet[3248]: E1008 19:31:11.866755 3248 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cgpbr"
Oct 8 19:31:11.868923 kubelet[3248]: E1008 19:31:11.867217 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cgpbr_kube-system(3d74a8c9-7cfd-4102-a721-6998bf392d12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cgpbr_kube-system(3d74a8c9-7cfd-4102-a721-6998bf392d12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cgpbr" podUID="3d74a8c9-7cfd-4102-a721-6998bf392d12"
Oct 8 19:31:11.898241 containerd[2022]: time="2024-10-08T19:31:11.897160304Z" level=error msg="Failed to destroy network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.899077 containerd[2022]: time="2024-10-08T19:31:11.898883693Z" level=error msg="encountered an error cleaning up failed sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.899417 containerd[2022]: time="2024-10-08T19:31:11.899251137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb464564c-vrn29,Uid:8a2d3aef-c6d0-496a-87a4-07be591e7fdd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.899738 kubelet[3248]: E1008 19:31:11.899678 3248 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.899829 kubelet[3248]: E1008 19:31:11.899763 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb464564c-vrn29"
Oct 8 19:31:11.899829 kubelet[3248]: E1008 19:31:11.899798 3248 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb464564c-vrn29"
Oct 8 19:31:11.899964 kubelet[3248]: E1008 19:31:11.899860 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb464564c-vrn29_calico-system(8a2d3aef-c6d0-496a-87a4-07be591e7fdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fb464564c-vrn29_calico-system(8a2d3aef-c6d0-496a-87a4-07be591e7fdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb464564c-vrn29" podUID="8a2d3aef-c6d0-496a-87a4-07be591e7fdd"
Oct 8 19:31:11.903284 containerd[2022]: time="2024-10-08T19:31:11.902951110Z" level=error msg="Failed to destroy network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.904420 containerd[2022]: time="2024-10-08T19:31:11.904087718Z" level=error msg="encountered an error cleaning up failed sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.904420 containerd[2022]: time="2024-10-08T19:31:11.904184883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlxwm,Uid:1cb798ed-2ab0-4a6d-84b4-808d1cf3653e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.904759 kubelet[3248]: E1008 19:31:11.904646 3248 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.904855 kubelet[3248]: E1008 19:31:11.904782 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mlxwm"
Oct 8 19:31:11.904855 kubelet[3248]: E1008 19:31:11.904823 3248 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mlxwm"
Oct 8 19:31:11.904969 kubelet[3248]: E1008 19:31:11.904889 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mlxwm_kube-system(1cb798ed-2ab0-4a6d-84b4-808d1cf3653e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-mlxwm_kube-system(1cb798ed-2ab0-4a6d-84b4-808d1cf3653e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mlxwm" podUID="1cb798ed-2ab0-4a6d-84b4-808d1cf3653e"
Oct 8 19:31:11.918273 kubelet[3248]: I1008 19:31:11.917440 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728"
Oct 8 19:31:11.919057 containerd[2022]: time="2024-10-08T19:31:11.918883721Z" level=info msg="StopPodSandbox for \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\""
Oct 8 19:31:11.919994 containerd[2022]: time="2024-10-08T19:31:11.919925121Z" level=info msg="Ensure that sandbox 19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728 in task-service has been cleanup successfully"
Oct 8 19:31:11.922336 kubelet[3248]: I1008 19:31:11.922264 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a"
Oct 8 19:31:11.926728 containerd[2022]: time="2024-10-08T19:31:11.926673778Z" level=info msg="StopPodSandbox for \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\""
Oct 8 19:31:11.927760 containerd[2022]: time="2024-10-08T19:31:11.927540539Z" level=info msg="Ensure that sandbox 5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a in task-service has been cleanup successfully"
Oct 8 19:31:11.929344 kubelet[3248]: I1008 19:31:11.929281 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127"
Oct 8 19:31:11.930868 containerd[2022]: time="2024-10-08T19:31:11.930427238Z" level=info msg="StopPodSandbox for \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\""
Oct 8 19:31:11.935588 containerd[2022]: time="2024-10-08T19:31:11.934222202Z" level=info msg="Ensure that sandbox 82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127 in task-service has been cleanup successfully"
Oct 8 19:31:11.940797 containerd[2022]: time="2024-10-08T19:31:11.940725768Z" level=error msg="Failed to destroy network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.942327 containerd[2022]: time="2024-10-08T19:31:11.942258358Z" level=error msg="encountered an error cleaning up failed sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.942474 containerd[2022]: time="2024-10-08T19:31:11.942348439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw82g,Uid:722ae81b-5bf7-40e0-b53d-6784c26bbee7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.942715 kubelet[3248]: E1008 19:31:11.942657 3248 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:11.942805 kubelet[3248]: E1008 19:31:11.942744 3248 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zw82g"
Oct 8 19:31:11.942805 kubelet[3248]: E1008 19:31:11.942783 3248 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zw82g"
Oct 8 19:31:11.942925 kubelet[3248]: E1008 19:31:11.942841 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zw82g_calico-system(722ae81b-5bf7-40e0-b53d-6784c26bbee7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zw82g_calico-system(722ae81b-5bf7-40e0-b53d-6784c26bbee7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7"
Oct 8 19:31:11.954987 containerd[2022]: time="2024-10-08T19:31:11.954565637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 8 19:31:12.040459 containerd[2022]: time="2024-10-08T19:31:12.040352409Z" level=error msg="StopPodSandbox for \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\" failed" error="failed to destroy network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:12.040855 kubelet[3248]: E1008 19:31:12.040793 3248 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728"
Oct 8 19:31:12.041262 kubelet[3248]: E1008 19:31:12.040891 3248 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728"}
Oct 8 19:31:12.041262 kubelet[3248]: E1008 19:31:12.041036 3248 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a2d3aef-c6d0-496a-87a4-07be591e7fdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:31:12.041262 kubelet[3248]: E1008 19:31:12.041081 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a2d3aef-c6d0-496a-87a4-07be591e7fdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb464564c-vrn29" podUID="8a2d3aef-c6d0-496a-87a4-07be591e7fdd"
Oct 8 19:31:12.047441 containerd[2022]: time="2024-10-08T19:31:12.047368872Z" level=error msg="StopPodSandbox for \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\" failed" error="failed to destroy network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:12.048143 kubelet[3248]: E1008 19:31:12.047871 3248 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a"
Oct 8 19:31:12.048143 kubelet[3248]: E1008 19:31:12.047957 3248 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a"}
Oct 8 19:31:12.048143 kubelet[3248]: E1008 19:31:12.048019 3248 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:31:12.048143 kubelet[3248]: E1008 19:31:12.048057 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mlxwm" podUID="1cb798ed-2ab0-4a6d-84b4-808d1cf3653e"
Oct 8 19:31:12.058970 containerd[2022]: time="2024-10-08T19:31:12.058902412Z" level=error msg="StopPodSandbox for \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\" failed" error="failed to destroy network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:12.059694 kubelet[3248]: E1008 19:31:12.059217 3248 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127"
Oct 8 19:31:12.059694 kubelet[3248]: E1008 19:31:12.059543 3248 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127"}
Oct 8 19:31:12.059694 kubelet[3248]: E1008 19:31:12.059623 3248 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d74a8c9-7cfd-4102-a721-6998bf392d12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:31:12.060013 kubelet[3248]: E1008 19:31:12.059674 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d74a8c9-7cfd-4102-a721-6998bf392d12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cgpbr" podUID="3d74a8c9-7cfd-4102-a721-6998bf392d12"
Oct 8 19:31:12.156420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127-shm.mount: Deactivated successfully.
Oct 8 19:31:12.948507 kubelet[3248]: I1008 19:31:12.948457 3248 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e"
Oct 8 19:31:12.949558 containerd[2022]: time="2024-10-08T19:31:12.949509911Z" level=info msg="StopPodSandbox for \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\""
Oct 8 19:31:12.951217 containerd[2022]: time="2024-10-08T19:31:12.950683414Z" level=info msg="Ensure that sandbox d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e in task-service has been cleanup successfully"
Oct 8 19:31:12.999158 containerd[2022]: time="2024-10-08T19:31:12.998876170Z" level=error msg="StopPodSandbox for \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\" failed" error="failed to destroy network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:31:13.000926 kubelet[3248]: E1008 19:31:13.000383 3248 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e"
Oct 8 19:31:13.000926 kubelet[3248]: E1008 19:31:13.000683 3248 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e"}
Oct 8 19:31:13.000926 kubelet[3248]: E1008 19:31:13.000771 3248 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"722ae81b-5bf7-40e0-b53d-6784c26bbee7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:31:13.000926 kubelet[3248]: E1008 19:31:13.000812 3248 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"722ae81b-5bf7-40e0-b53d-6784c26bbee7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zw82g" podUID="722ae81b-5bf7-40e0-b53d-6784c26bbee7"
Oct 8 19:31:17.435729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436416495.mount: Deactivated successfully.
Oct 8 19:31:17.531633 containerd[2022]: time="2024-10-08T19:31:17.531548458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:17.536939 containerd[2022]: time="2024-10-08T19:31:17.536873863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:31:17.544637 containerd[2022]: time="2024-10-08T19:31:17.544548590Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:17.551844 containerd[2022]: time="2024-10-08T19:31:17.551731516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:17.553584 containerd[2022]: time="2024-10-08T19:31:17.553170843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 5.59854388s" Oct 8 19:31:17.553584 containerd[2022]: time="2024-10-08T19:31:17.553276124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:31:17.582545 containerd[2022]: time="2024-10-08T19:31:17.582239795Z" level=info msg="CreateContainer within sandbox \"2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:31:17.610569 containerd[2022]: time="2024-10-08T19:31:17.610510203Z" level=info msg="CreateContainer 
within sandbox \"2d38b5c89010037391a10d5ff44a5c4d569750a9519c22718605c60b4b2181c2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0d02e63053f797713c71fdb8380579f4b0d902756ef43d54c1395e561f4224c5\"" Oct 8 19:31:17.612938 containerd[2022]: time="2024-10-08T19:31:17.611733362Z" level=info msg="StartContainer for \"0d02e63053f797713c71fdb8380579f4b0d902756ef43d54c1395e561f4224c5\"" Oct 8 19:31:17.668524 systemd[1]: Started cri-containerd-0d02e63053f797713c71fdb8380579f4b0d902756ef43d54c1395e561f4224c5.scope - libcontainer container 0d02e63053f797713c71fdb8380579f4b0d902756ef43d54c1395e561f4224c5. Oct 8 19:31:17.729447 containerd[2022]: time="2024-10-08T19:31:17.729281996Z" level=info msg="StartContainer for \"0d02e63053f797713c71fdb8380579f4b0d902756ef43d54c1395e561f4224c5\" returns successfully" Oct 8 19:31:17.853997 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:31:17.854183 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 19:31:22.142697 systemd[1]: Started sshd@7-172.31.26.181:22-139.178.68.195:59808.service - OpenSSH per-connection server daemon (139.178.68.195:59808). Oct 8 19:31:22.342260 sshd[4625]: Accepted publickey for core from 139.178.68.195 port 59808 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:22.345279 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:22.352941 systemd-logind[1990]: New session 8 of user core. Oct 8 19:31:22.366457 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:31:22.620445 sshd[4625]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:22.627433 systemd[1]: sshd@7-172.31.26.181:22-139.178.68.195:59808.service: Deactivated successfully. Oct 8 19:31:22.632842 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:31:22.634065 systemd-logind[1990]: Session 8 logged out. Waiting for processes to exit. 
Oct 8 19:31:22.635961 systemd-logind[1990]: Removed session 8. Oct 8 19:31:23.609820 containerd[2022]: time="2024-10-08T19:31:23.608398318Z" level=info msg="StopPodSandbox for \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\"" Oct 8 19:31:23.697784 kubelet[3248]: I1008 19:31:23.696174 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bv4rr" podStartSLOduration=6.798509589 podStartE2EDuration="23.69614794s" podCreationTimestamp="2024-10-08 19:31:00 +0000 UTC" firstStartedPulling="2024-10-08 19:31:00.657534445 +0000 UTC m=+22.266172314" lastFinishedPulling="2024-10-08 19:31:17.555172796 +0000 UTC m=+39.163810665" observedRunningTime="2024-10-08 19:31:18.010654914 +0000 UTC m=+39.619292807" watchObservedRunningTime="2024-10-08 19:31:23.69614794 +0000 UTC m=+45.304785833" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.699 [INFO][4680] k8s.go 608: Cleaning up netns ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.699 [INFO][4680] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" iface="eth0" netns="/var/run/netns/cni-b3eac807-b0de-fa60-6950-8b7f9d1d3e1e" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.699 [INFO][4680] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" iface="eth0" netns="/var/run/netns/cni-b3eac807-b0de-fa60-6950-8b7f9d1d3e1e" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.700 [INFO][4680] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" iface="eth0" netns="/var/run/netns/cni-b3eac807-b0de-fa60-6950-8b7f9d1d3e1e" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.700 [INFO][4680] k8s.go 615: Releasing IP address(es) ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.700 [INFO][4680] utils.go 188: Calico CNI releasing IP address ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.742 [INFO][4686] ipam_plugin.go 417: Releasing address using handleID ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.742 [INFO][4686] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.742 [INFO][4686] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.754 [WARNING][4686] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.754 [INFO][4686] ipam_plugin.go 445: Releasing address using workloadID ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.756 [INFO][4686] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:23.763889 containerd[2022]: 2024-10-08 19:31:23.761 [INFO][4680] k8s.go 621: Teardown processing complete. ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:23.767426 containerd[2022]: time="2024-10-08T19:31:23.767357447Z" level=info msg="TearDown network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\" successfully" Oct 8 19:31:23.767426 containerd[2022]: time="2024-10-08T19:31:23.767414620Z" level=info msg="StopPodSandbox for \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\" returns successfully" Oct 8 19:31:23.768548 containerd[2022]: time="2024-10-08T19:31:23.768488665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cgpbr,Uid:3d74a8c9-7cfd-4102-a721-6998bf392d12,Namespace:kube-system,Attempt:1,}" Oct 8 19:31:23.770099 systemd[1]: run-netns-cni\x2db3eac807\x2db0de\x2dfa60\x2d6950\x2d8b7f9d1d3e1e.mount: Deactivated successfully. Oct 8 19:31:23.993744 systemd-networkd[1845]: cali3984cd44739: Link UP Oct 8 19:31:23.995522 (udev-worker)[4714]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:31:23.996018 systemd-networkd[1845]: cali3984cd44739: Gained carrier Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.832 [INFO][4694] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.851 [INFO][4694] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0 coredns-7db6d8ff4d- kube-system 3d74a8c9-7cfd-4102-a721-6998bf392d12 750 0 2024-10-08 19:30:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-181 coredns-7db6d8ff4d-cgpbr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3984cd44739 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.852 [INFO][4694] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.903 [INFO][4705] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" HandleID="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.923 [INFO][4705] ipam_plugin.go 270: Auto assigning IP 
ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" HandleID="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289e20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-181", "pod":"coredns-7db6d8ff4d-cgpbr", "timestamp":"2024-10-08 19:31:23.903363157 +0000 UTC"}, Hostname:"ip-172-31-26-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.923 [INFO][4705] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.923 [INFO][4705] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.923 [INFO][4705] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-181' Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.926 [INFO][4705] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.934 [INFO][4705] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.945 [INFO][4705] ipam.go 489: Trying affinity for 192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.948 [INFO][4705] ipam.go 155: Attempting to load block cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.952 [INFO][4705] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 
host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.952 [INFO][4705] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.955 [INFO][4705] ipam.go 1685: Creating new handle: k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.962 [INFO][4705] ipam.go 1203: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.973 [INFO][4705] ipam.go 1216: Successfully claimed IPs: [192.168.84.1/26] block=192.168.84.0/26 handle="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.973 [INFO][4705] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.84.1/26] handle="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" host="ip-172-31-26-181" Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.973 [INFO][4705] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:31:24.022283 containerd[2022]: 2024-10-08 19:31:23.973 [INFO][4705] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.84.1/26] IPv6=[] ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" HandleID="k8s-pod-network.97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:24.023529 containerd[2022]: 2024-10-08 19:31:23.977 [INFO][4694] k8s.go 386: Populated endpoint ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d74a8c9-7cfd-4102-a721-6998bf392d12", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"", Pod:"coredns-7db6d8ff4d-cgpbr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3984cd44739", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:24.023529 containerd[2022]: 2024-10-08 19:31:23.977 [INFO][4694] k8s.go 387: Calico CNI using IPs: [192.168.84.1/32] ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:24.023529 containerd[2022]: 2024-10-08 19:31:23.977 [INFO][4694] dataplane_linux.go 68: Setting the host side veth name to cali3984cd44739 ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:24.023529 containerd[2022]: 2024-10-08 19:31:23.993 [INFO][4694] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:24.023529 containerd[2022]: 2024-10-08 19:31:23.994 [INFO][4694] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d74a8c9-7cfd-4102-a721-6998bf392d12", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced", Pod:"coredns-7db6d8ff4d-cgpbr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3984cd44739", MAC:"62:53:05:10:b8:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:24.023529 containerd[2022]: 2024-10-08 19:31:24.015 [INFO][4694] k8s.go 500: Wrote updated endpoint to datastore ContainerID="97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-cgpbr" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:24.058956 containerd[2022]: time="2024-10-08T19:31:24.058317529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:31:24.058956 containerd[2022]: time="2024-10-08T19:31:24.058423181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:24.058956 containerd[2022]: time="2024-10-08T19:31:24.058465274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:31:24.058956 containerd[2022]: time="2024-10-08T19:31:24.058490115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:24.111530 systemd[1]: Started cri-containerd-97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced.scope - libcontainer container 97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced. 
Oct 8 19:31:24.174560 containerd[2022]: time="2024-10-08T19:31:24.174487177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cgpbr,Uid:3d74a8c9-7cfd-4102-a721-6998bf392d12,Namespace:kube-system,Attempt:1,} returns sandbox id \"97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced\"" Oct 8 19:31:24.181714 containerd[2022]: time="2024-10-08T19:31:24.181650809Z" level=info msg="CreateContainer within sandbox \"97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:31:24.209650 containerd[2022]: time="2024-10-08T19:31:24.209590296Z" level=info msg="CreateContainer within sandbox \"97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d0f13fd2d82190ade4203d83382566a91cdd8347cdb7c4374bc4dbb7f6078ff\"" Oct 8 19:31:24.211091 containerd[2022]: time="2024-10-08T19:31:24.211047597Z" level=info msg="StartContainer for \"9d0f13fd2d82190ade4203d83382566a91cdd8347cdb7c4374bc4dbb7f6078ff\"" Oct 8 19:31:24.254520 systemd[1]: Started cri-containerd-9d0f13fd2d82190ade4203d83382566a91cdd8347cdb7c4374bc4dbb7f6078ff.scope - libcontainer container 9d0f13fd2d82190ade4203d83382566a91cdd8347cdb7c4374bc4dbb7f6078ff. Oct 8 19:31:24.313465 containerd[2022]: time="2024-10-08T19:31:24.313004914Z" level=info msg="StartContainer for \"9d0f13fd2d82190ade4203d83382566a91cdd8347cdb7c4374bc4dbb7f6078ff\" returns successfully" Oct 8 19:31:24.769991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998176850.mount: Deactivated successfully. 
Oct 8 19:31:24.952155 kubelet[3248]: I1008 19:31:24.952091 3248 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:31:25.050315 kubelet[3248]: I1008 19:31:25.047706 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cgpbr" podStartSLOduration=34.047678197 podStartE2EDuration="34.047678197s" podCreationTimestamp="2024-10-08 19:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:31:25.017844619 +0000 UTC m=+46.626482536" watchObservedRunningTime="2024-10-08 19:31:25.047678197 +0000 UTC m=+46.656316102" Oct 8 19:31:25.412348 kernel: bpftool[4846]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:31:25.611729 containerd[2022]: time="2024-10-08T19:31:25.609560657Z" level=info msg="StopPodSandbox for \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\"" Oct 8 19:31:25.850858 systemd-networkd[1845]: cali3984cd44739: Gained IPv6LL Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.761 [INFO][4896] k8s.go 608: Cleaning up netns ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.761 [INFO][4896] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" iface="eth0" netns="/var/run/netns/cni-3abf3027-7b40-bea0-c63e-8653ce78242f" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.762 [INFO][4896] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" iface="eth0" netns="/var/run/netns/cni-3abf3027-7b40-bea0-c63e-8653ce78242f" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.763 [INFO][4896] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" iface="eth0" netns="/var/run/netns/cni-3abf3027-7b40-bea0-c63e-8653ce78242f" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.764 [INFO][4896] k8s.go 615: Releasing IP address(es) ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.764 [INFO][4896] utils.go 188: Calico CNI releasing IP address ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.859 [INFO][4904] ipam_plugin.go 417: Releasing address using handleID ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.860 [INFO][4904] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.860 [INFO][4904] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.883 [WARNING][4904] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.883 [INFO][4904] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.887 [INFO][4904] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:25.899406 containerd[2022]: 2024-10-08 19:31:25.893 [INFO][4896] k8s.go 621: Teardown processing complete. ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:25.904267 containerd[2022]: time="2024-10-08T19:31:25.903400582Z" level=info msg="TearDown network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\" successfully" Oct 8 19:31:25.904267 containerd[2022]: time="2024-10-08T19:31:25.903478441Z" level=info msg="StopPodSandbox for \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\" returns successfully" Oct 8 19:31:25.906931 containerd[2022]: time="2024-10-08T19:31:25.906866702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw82g,Uid:722ae81b-5bf7-40e0-b53d-6784c26bbee7,Namespace:calico-system,Attempt:1,}" Oct 8 19:31:25.914450 systemd[1]: run-netns-cni\x2d3abf3027\x2d7b40\x2dbea0\x2dc63e\x2d8653ce78242f.mount: Deactivated successfully. Oct 8 19:31:26.225693 (udev-worker)[4712]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:31:26.228888 systemd-networkd[1845]: vxlan.calico: Link UP Oct 8 19:31:26.228896 systemd-networkd[1845]: vxlan.calico: Gained carrier Oct 8 19:31:26.304659 systemd-networkd[1845]: calia2b37f852c0: Link UP Oct 8 19:31:26.307509 systemd-networkd[1845]: calia2b37f852c0: Gained carrier Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.073 [INFO][4928] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0 csi-node-driver- calico-system 722ae81b-5bf7-40e0-b53d-6784c26bbee7 780 0 2024-10-08 19:31:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-26-181 csi-node-driver-zw82g eth0 default [] [] [kns.calico-system ksa.calico-system.default] calia2b37f852c0 [] []}} ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.073 [INFO][4928] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.164 [INFO][4938] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" HandleID="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:26.347565 containerd[2022]: 
2024-10-08 19:31:26.186 [INFO][4938] ipam_plugin.go 270: Auto assigning IP ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" HandleID="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000383550), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-181", "pod":"csi-node-driver-zw82g", "timestamp":"2024-10-08 19:31:26.164693393 +0000 UTC"}, Hostname:"ip-172-31-26-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.187 [INFO][4938] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.187 [INFO][4938] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.187 [INFO][4938] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-181' Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.194 [INFO][4938] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.216 [INFO][4938] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.240 [INFO][4938] ipam.go 489: Trying affinity for 192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.246 [INFO][4938] ipam.go 155: Attempting to load block cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.255 [INFO][4938] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.255 [INFO][4938] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.259 [INFO][4938] ipam.go 1685: Creating new handle: k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.276 [INFO][4938] ipam.go 1203: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.294 [INFO][4938] ipam.go 1216: Successfully claimed IPs: [192.168.84.2/26] block=192.168.84.0/26 
handle="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.295 [INFO][4938] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.84.2/26] handle="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" host="ip-172-31-26-181" Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.295 [INFO][4938] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:26.347565 containerd[2022]: 2024-10-08 19:31:26.295 [INFO][4938] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.84.2/26] IPv6=[] ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" HandleID="k8s-pod-network.56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:26.351115 containerd[2022]: 2024-10-08 19:31:26.299 [INFO][4928] k8s.go 386: Populated endpoint ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"722ae81b-5bf7-40e0-b53d-6784c26bbee7", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"", Pod:"csi-node-driver-zw82g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia2b37f852c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:26.351115 containerd[2022]: 2024-10-08 19:31:26.299 [INFO][4928] k8s.go 387: Calico CNI using IPs: [192.168.84.2/32] ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:26.351115 containerd[2022]: 2024-10-08 19:31:26.299 [INFO][4928] dataplane_linux.go 68: Setting the host side veth name to calia2b37f852c0 ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:26.351115 containerd[2022]: 2024-10-08 19:31:26.308 [INFO][4928] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:26.351115 containerd[2022]: 2024-10-08 19:31:26.309 [INFO][4928] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" 
WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"722ae81b-5bf7-40e0-b53d-6784c26bbee7", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e", Pod:"csi-node-driver-zw82g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia2b37f852c0", MAC:"7a:97:fa:82:a3:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:26.351115 containerd[2022]: 2024-10-08 19:31:26.334 [INFO][4928] k8s.go 500: Wrote updated endpoint to datastore ContainerID="56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e" Namespace="calico-system" Pod="csi-node-driver-zw82g" WorkloadEndpoint="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:26.416256 containerd[2022]: 
time="2024-10-08T19:31:26.415562534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:31:26.416256 containerd[2022]: time="2024-10-08T19:31:26.416080761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:26.417361 containerd[2022]: time="2024-10-08T19:31:26.417232929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:31:26.417361 containerd[2022]: time="2024-10-08T19:31:26.417289033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:26.476722 systemd[1]: Started cri-containerd-56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e.scope - libcontainer container 56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e. 
Oct 8 19:31:26.531717 containerd[2022]: time="2024-10-08T19:31:26.531663473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw82g,Uid:722ae81b-5bf7-40e0-b53d-6784c26bbee7,Namespace:calico-system,Attempt:1,} returns sandbox id \"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e\"" Oct 8 19:31:26.542477 containerd[2022]: time="2024-10-08T19:31:26.542373473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:31:26.611676 containerd[2022]: time="2024-10-08T19:31:26.611184722Z" level=info msg="StopPodSandbox for \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\"" Oct 8 19:31:26.613521 containerd[2022]: time="2024-10-08T19:31:26.611353166Z" level=info msg="StopPodSandbox for \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\"" Oct 8 19:31:26.918988 systemd[1]: run-containerd-runc-k8s.io-56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e-runc.vuQbYo.mount: Deactivated successfully. Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.797 [INFO][5043] k8s.go 608: Cleaning up netns ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.797 [INFO][5043] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" iface="eth0" netns="/var/run/netns/cni-df2c761e-8cf0-41de-2f2f-b75f4976a332" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.798 [INFO][5043] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" iface="eth0" netns="/var/run/netns/cni-df2c761e-8cf0-41de-2f2f-b75f4976a332" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.798 [INFO][5043] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" iface="eth0" netns="/var/run/netns/cni-df2c761e-8cf0-41de-2f2f-b75f4976a332" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.798 [INFO][5043] k8s.go 615: Releasing IP address(es) ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.798 [INFO][5043] utils.go 188: Calico CNI releasing IP address ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.904 [INFO][5063] ipam_plugin.go 417: Releasing address using handleID ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.904 [INFO][5063] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.904 [INFO][5063] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.936 [WARNING][5063] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.936 [INFO][5063] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.952 [INFO][5063] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:26.979896 containerd[2022]: 2024-10-08 19:31:26.965 [INFO][5043] k8s.go 621: Teardown processing complete. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:26.979896 containerd[2022]: time="2024-10-08T19:31:26.968286410Z" level=info msg="TearDown network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\" successfully" Oct 8 19:31:26.979896 containerd[2022]: time="2024-10-08T19:31:26.968329488Z" level=info msg="StopPodSandbox for \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\" returns successfully" Oct 8 19:31:26.979896 containerd[2022]: time="2024-10-08T19:31:26.969551554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlxwm,Uid:1cb798ed-2ab0-4a6d-84b4-808d1cf3653e,Namespace:kube-system,Attempt:1,}" Oct 8 19:31:26.993644 systemd[1]: run-netns-cni\x2ddf2c761e\x2d8cf0\x2d41de\x2d2f2f\x2db75f4976a332.mount: Deactivated successfully. 
Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.781 [INFO][5044] k8s.go 608: Cleaning up netns ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.781 [INFO][5044] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" iface="eth0" netns="/var/run/netns/cni-5d9c9abd-a388-222b-997a-b7736fd4d09c" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.781 [INFO][5044] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" iface="eth0" netns="/var/run/netns/cni-5d9c9abd-a388-222b-997a-b7736fd4d09c" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.782 [INFO][5044] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" iface="eth0" netns="/var/run/netns/cni-5d9c9abd-a388-222b-997a-b7736fd4d09c" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.782 [INFO][5044] k8s.go 615: Releasing IP address(es) ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.782 [INFO][5044] utils.go 188: Calico CNI releasing IP address ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.902 [INFO][5059] ipam_plugin.go 417: Releasing address using handleID ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.906 [INFO][5059] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.952 [INFO][5059] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.987 [WARNING][5059] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.987 [INFO][5059] ipam_plugin.go 445: Releasing address using workloadID ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:26.992 [INFO][5059] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:27.044360 containerd[2022]: 2024-10-08 19:31:27.002 [INFO][5044] k8s.go 621: Teardown processing complete. 
ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:27.048636 containerd[2022]: time="2024-10-08T19:31:27.047224936Z" level=info msg="TearDown network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\" successfully" Oct 8 19:31:27.048636 containerd[2022]: time="2024-10-08T19:31:27.047274701Z" level=info msg="StopPodSandbox for \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\" returns successfully" Oct 8 19:31:27.050597 containerd[2022]: time="2024-10-08T19:31:27.049650592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb464564c-vrn29,Uid:8a2d3aef-c6d0-496a-87a4-07be591e7fdd,Namespace:calico-system,Attempt:1,}" Oct 8 19:31:27.060106 systemd[1]: run-netns-cni\x2d5d9c9abd\x2da388\x2d222b\x2d997a\x2db7736fd4d09c.mount: Deactivated successfully. Oct 8 19:31:27.452858 (udev-worker)[4977]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:31:27.457962 systemd-networkd[1845]: cali4584102755e: Link UP Oct 8 19:31:27.463340 systemd-networkd[1845]: cali4584102755e: Gained carrier Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.193 [INFO][5096] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0 calico-kube-controllers-6fb464564c- calico-system 8a2d3aef-c6d0-496a-87a4-07be591e7fdd 793 0 2024-10-08 19:31:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fb464564c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-181 calico-kube-controllers-6fb464564c-vrn29 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4584102755e [] []}} 
ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.194 [INFO][5096] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.319 [INFO][5114] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" HandleID="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.360 [INFO][5114] ipam_plugin.go 270: Auto assigning IP ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" HandleID="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031a1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-181", "pod":"calico-kube-controllers-6fb464564c-vrn29", "timestamp":"2024-10-08 19:31:27.318938691 +0000 UTC"}, Hostname:"ip-172-31-26-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.360 [INFO][5114] ipam_plugin.go 358: About to acquire 
host-wide IPAM lock. Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.360 [INFO][5114] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.360 [INFO][5114] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-181' Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.365 [INFO][5114] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.376 [INFO][5114] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.392 [INFO][5114] ipam.go 489: Trying affinity for 192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.399 [INFO][5114] ipam.go 155: Attempting to load block cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.411 [INFO][5114] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.412 [INFO][5114] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.420 [INFO][5114] ipam.go 1685: Creating new handle: k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1 Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.428 [INFO][5114] ipam.go 1203: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.442 [INFO][5114] 
ipam.go 1216: Successfully claimed IPs: [192.168.84.3/26] block=192.168.84.0/26 handle="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.442 [INFO][5114] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.84.3/26] handle="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" host="ip-172-31-26-181" Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.442 [INFO][5114] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:27.532303 containerd[2022]: 2024-10-08 19:31:27.443 [INFO][5114] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.84.3/26] IPv6=[] ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" HandleID="k8s-pod-network.1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.537435 containerd[2022]: 2024-10-08 19:31:27.449 [INFO][5096] k8s.go 386: Populated endpoint ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0", GenerateName:"calico-kube-controllers-6fb464564c-", Namespace:"calico-system", SelfLink:"", UID:"8a2d3aef-c6d0-496a-87a4-07be591e7fdd", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"6fb464564c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"", Pod:"calico-kube-controllers-6fb464564c-vrn29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4584102755e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:27.537435 containerd[2022]: 2024-10-08 19:31:27.449 [INFO][5096] k8s.go 387: Calico CNI using IPs: [192.168.84.3/32] ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.537435 containerd[2022]: 2024-10-08 19:31:27.449 [INFO][5096] dataplane_linux.go 68: Setting the host side veth name to cali4584102755e ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.537435 containerd[2022]: 2024-10-08 19:31:27.464 [INFO][5096] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 
8 19:31:27.537435 containerd[2022]: 2024-10-08 19:31:27.471 [INFO][5096] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0", GenerateName:"calico-kube-controllers-6fb464564c-", Namespace:"calico-system", SelfLink:"", UID:"8a2d3aef-c6d0-496a-87a4-07be591e7fdd", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb464564c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1", Pod:"calico-kube-controllers-6fb464564c-vrn29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4584102755e", MAC:"66:ae:e5:96:02:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:27.537435 
containerd[2022]: 2024-10-08 19:31:27.522 [INFO][5096] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1" Namespace="calico-system" Pod="calico-kube-controllers-6fb464564c-vrn29" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:27.670822 systemd[1]: Started sshd@8-172.31.26.181:22-139.178.68.195:59818.service - OpenSSH per-connection server daemon (139.178.68.195:59818). Oct 8 19:31:27.673907 systemd-networkd[1845]: cali726836978e9: Link UP Oct 8 19:31:27.679431 systemd-networkd[1845]: cali726836978e9: Gained carrier Oct 8 19:31:27.697908 containerd[2022]: time="2024-10-08T19:31:27.697315516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:31:27.697908 containerd[2022]: time="2024-10-08T19:31:27.697412392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:27.697908 containerd[2022]: time="2024-10-08T19:31:27.697454437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:31:27.697908 containerd[2022]: time="2024-10-08T19:31:27.697488270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.223 [INFO][5084] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0 coredns-7db6d8ff4d- kube-system 1cb798ed-2ab0-4a6d-84b4-808d1cf3653e 794 0 2024-10-08 19:30:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-181 coredns-7db6d8ff4d-mlxwm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali726836978e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.224 [INFO][5084] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.373 [INFO][5119] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" HandleID="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.404 [INFO][5119] ipam_plugin.go 270: Auto assigning IP ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" HandleID="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" 
Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000e0560), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-181", "pod":"coredns-7db6d8ff4d-mlxwm", "timestamp":"2024-10-08 19:31:27.373494976 +0000 UTC"}, Hostname:"ip-172-31-26-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.404 [INFO][5119] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.442 [INFO][5119] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.442 [INFO][5119] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-181' Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.448 [INFO][5119] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.485 [INFO][5119] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.500 [INFO][5119] ipam.go 489: Trying affinity for 192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.527 [INFO][5119] ipam.go 155: Attempting to load block cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.544 [INFO][5119] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.546 [INFO][5119] ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.84.0/26 handle="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.555 [INFO][5119] ipam.go 1685: Creating new handle: k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.578 [INFO][5119] ipam.go 1203: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.624 [INFO][5119] ipam.go 1216: Successfully claimed IPs: [192.168.84.4/26] block=192.168.84.0/26 handle="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.632 [INFO][5119] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.84.4/26] handle="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" host="ip-172-31-26-181" Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.632 [INFO][5119] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:31:27.738226 containerd[2022]: 2024-10-08 19:31:27.632 [INFO][5119] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.84.4/26] IPv6=[] ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" HandleID="k8s-pod-network.a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:27.739724 containerd[2022]: 2024-10-08 19:31:27.644 [INFO][5084] k8s.go 386: Populated endpoint ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"", Pod:"coredns-7db6d8ff4d-mlxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali726836978e9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:27.739724 containerd[2022]: 2024-10-08 19:31:27.648 [INFO][5084] k8s.go 387: Calico CNI using IPs: [192.168.84.4/32] ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:27.739724 containerd[2022]: 2024-10-08 19:31:27.648 [INFO][5084] dataplane_linux.go 68: Setting the host side veth name to cali726836978e9 ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:27.739724 containerd[2022]: 2024-10-08 19:31:27.678 [INFO][5084] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:27.739724 containerd[2022]: 2024-10-08 19:31:27.681 [INFO][5084] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf", Pod:"coredns-7db6d8ff4d-mlxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali726836978e9", MAC:"be:67:1f:31:c3:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:27.739724 containerd[2022]: 2024-10-08 19:31:27.722 [INFO][5084] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-mlxwm" WorkloadEndpoint="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:27.771582 systemd[1]: Started cri-containerd-1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1.scope - libcontainer container 1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1. Oct 8 19:31:27.847236 containerd[2022]: time="2024-10-08T19:31:27.846855841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:31:27.847236 containerd[2022]: time="2024-10-08T19:31:27.846984822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:27.847236 containerd[2022]: time="2024-10-08T19:31:27.847055705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:31:27.847236 containerd[2022]: time="2024-10-08T19:31:27.847092360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:31:27.918953 sshd[5161]: Accepted publickey for core from 139.178.68.195 port 59818 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:27.925583 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:27.961360 systemd[1]: Started cri-containerd-a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf.scope - libcontainer container a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf. Oct 8 19:31:27.969848 systemd-logind[1990]: New session 9 of user core. Oct 8 19:31:27.975061 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 8 19:31:28.094605 systemd-networkd[1845]: vxlan.calico: Gained IPv6LL Oct 8 19:31:28.156267 containerd[2022]: time="2024-10-08T19:31:28.156186724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb464564c-vrn29,Uid:8a2d3aef-c6d0-496a-87a4-07be591e7fdd,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1\"" Oct 8 19:31:28.174777 containerd[2022]: time="2024-10-08T19:31:28.174720014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlxwm,Uid:1cb798ed-2ab0-4a6d-84b4-808d1cf3653e,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf\"" Oct 8 19:31:28.193339 containerd[2022]: time="2024-10-08T19:31:28.193269752Z" level=info msg="CreateContainer within sandbox \"a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:31:28.217808 systemd-networkd[1845]: calia2b37f852c0: Gained IPv6LL Oct 8 19:31:28.280237 containerd[2022]: time="2024-10-08T19:31:28.276060788Z" level=info msg="CreateContainer within sandbox \"a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a63c5cffbeb573aa76d1a3b839285e68fe9a8e9b6d2b49cc0699d8679710a5f\"" Oct 8 19:31:28.280237 containerd[2022]: time="2024-10-08T19:31:28.278485676Z" level=info msg="StartContainer for \"3a63c5cffbeb573aa76d1a3b839285e68fe9a8e9b6d2b49cc0699d8679710a5f\"" Oct 8 19:31:28.410487 sshd[5161]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:28.413620 systemd[1]: Started cri-containerd-3a63c5cffbeb573aa76d1a3b839285e68fe9a8e9b6d2b49cc0699d8679710a5f.scope - libcontainer container 3a63c5cffbeb573aa76d1a3b839285e68fe9a8e9b6d2b49cc0699d8679710a5f. 
Oct 8 19:31:28.422449 systemd[1]: sshd@8-172.31.26.181:22-139.178.68.195:59818.service: Deactivated successfully. Oct 8 19:31:28.431157 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:31:28.439728 systemd-logind[1990]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:31:28.443617 systemd-logind[1990]: Removed session 9. Oct 8 19:31:28.502653 containerd[2022]: time="2024-10-08T19:31:28.501170300Z" level=info msg="StartContainer for \"3a63c5cffbeb573aa76d1a3b839285e68fe9a8e9b6d2b49cc0699d8679710a5f\" returns successfully" Oct 8 19:31:28.612397 containerd[2022]: time="2024-10-08T19:31:28.612337156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:28.615297 containerd[2022]: time="2024-10-08T19:31:28.615248599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:31:28.616509 containerd[2022]: time="2024-10-08T19:31:28.616412233Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:28.620912 containerd[2022]: time="2024-10-08T19:31:28.620810955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:28.622526 containerd[2022]: time="2024-10-08T19:31:28.622353858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 2.079905636s" Oct 8 19:31:28.622526 containerd[2022]: 
time="2024-10-08T19:31:28.622409086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:31:28.625676 containerd[2022]: time="2024-10-08T19:31:28.625306782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:31:28.628032 containerd[2022]: time="2024-10-08T19:31:28.627979125Z" level=info msg="CreateContainer within sandbox \"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:31:28.667614 containerd[2022]: time="2024-10-08T19:31:28.667463930Z" level=info msg="CreateContainer within sandbox \"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c2111a71773a5cfdda94535622e745223e304f93b24b187045618c5f9d6e94be\"" Oct 8 19:31:28.669960 containerd[2022]: time="2024-10-08T19:31:28.669876140Z" level=info msg="StartContainer for \"c2111a71773a5cfdda94535622e745223e304f93b24b187045618c5f9d6e94be\"" Oct 8 19:31:28.728729 systemd[1]: Started cri-containerd-c2111a71773a5cfdda94535622e745223e304f93b24b187045618c5f9d6e94be.scope - libcontainer container c2111a71773a5cfdda94535622e745223e304f93b24b187045618c5f9d6e94be. 
Oct 8 19:31:28.784448 containerd[2022]: time="2024-10-08T19:31:28.784318233Z" level=info msg="StartContainer for \"c2111a71773a5cfdda94535622e745223e304f93b24b187045618c5f9d6e94be\" returns successfully" Oct 8 19:31:28.794831 systemd-networkd[1845]: cali726836978e9: Gained IPv6LL Oct 8 19:31:28.986028 systemd-networkd[1845]: cali4584102755e: Gained IPv6LL Oct 8 19:31:29.068277 kubelet[3248]: I1008 19:31:29.068145 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mlxwm" podStartSLOduration=38.06812203 podStartE2EDuration="38.06812203s" podCreationTimestamp="2024-10-08 19:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:31:29.066539447 +0000 UTC m=+50.675177352" watchObservedRunningTime="2024-10-08 19:31:29.06812203 +0000 UTC m=+50.676759911" Oct 8 19:31:31.965396 ntpd[1985]: Listen normally on 7 vxlan.calico 192.168.84.0:123 Oct 8 19:31:31.967582 ntpd[1985]: 8 Oct 19:31:31 ntpd[1985]: Listen normally on 7 vxlan.calico 192.168.84.0:123 Oct 8 19:31:31.967582 ntpd[1985]: 8 Oct 19:31:31 ntpd[1985]: Listen normally on 8 cali3984cd44739 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 8 19:31:31.967582 ntpd[1985]: 8 Oct 19:31:31 ntpd[1985]: Listen normally on 9 vxlan.calico [fe80::6431:4fff:feda:5ea2%5]:123 Oct 8 19:31:31.967582 ntpd[1985]: 8 Oct 19:31:31 ntpd[1985]: Listen normally on 10 calia2b37f852c0 [fe80::ecee:eeff:feee:eeee%6]:123 Oct 8 19:31:31.967582 ntpd[1985]: 8 Oct 19:31:31 ntpd[1985]: Listen normally on 11 cali4584102755e [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:31:31.967582 ntpd[1985]: 8 Oct 19:31:31 ntpd[1985]: Listen normally on 12 cali726836978e9 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:31:31.966244 ntpd[1985]: Listen normally on 8 cali3984cd44739 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 8 19:31:31.966353 ntpd[1985]: Listen normally on 9 vxlan.calico [fe80::6431:4fff:feda:5ea2%5]:123 Oct 8 
19:31:31.966428 ntpd[1985]: Listen normally on 10 calia2b37f852c0 [fe80::ecee:eeff:feee:eeee%6]:123 Oct 8 19:31:31.966523 ntpd[1985]: Listen normally on 11 cali4584102755e [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:31:31.966597 ntpd[1985]: Listen normally on 12 cali726836978e9 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:31:32.050966 containerd[2022]: time="2024-10-08T19:31:32.050271888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:32.058878 containerd[2022]: time="2024-10-08T19:31:32.058823352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:31:32.062315 containerd[2022]: time="2024-10-08T19:31:32.060993360Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:32.067479 containerd[2022]: time="2024-10-08T19:31:32.067417056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:32.072882 containerd[2022]: time="2024-10-08T19:31:32.071053884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 3.445663597s" Oct 8 19:31:32.073137 containerd[2022]: time="2024-10-08T19:31:32.073096536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:31:32.075153 containerd[2022]: time="2024-10-08T19:31:32.075106608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:31:32.123347 containerd[2022]: time="2024-10-08T19:31:32.123279793Z" level=info msg="CreateContainer within sandbox \"1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:31:32.157004 containerd[2022]: time="2024-10-08T19:31:32.156838597Z" level=info msg="CreateContainer within sandbox \"1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d56437c995246bdb36c281ce17aa32c09da944b5a062de827ec2b8fa9c719065\"" Oct 8 19:31:32.162220 containerd[2022]: time="2024-10-08T19:31:32.159780349Z" level=info msg="StartContainer for \"d56437c995246bdb36c281ce17aa32c09da944b5a062de827ec2b8fa9c719065\"" Oct 8 19:31:32.167425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507564685.mount: Deactivated successfully. Oct 8 19:31:32.256257 systemd[1]: Started cri-containerd-d56437c995246bdb36c281ce17aa32c09da944b5a062de827ec2b8fa9c719065.scope - libcontainer container d56437c995246bdb36c281ce17aa32c09da944b5a062de827ec2b8fa9c719065. 
Oct 8 19:31:32.405169 containerd[2022]: time="2024-10-08T19:31:32.404501090Z" level=info msg="StartContainer for \"d56437c995246bdb36c281ce17aa32c09da944b5a062de827ec2b8fa9c719065\" returns successfully" Oct 8 19:31:33.291532 kubelet[3248]: I1008 19:31:33.291410 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fb464564c-vrn29" podStartSLOduration=29.399519531 podStartE2EDuration="33.290965106s" podCreationTimestamp="2024-10-08 19:31:00 +0000 UTC" firstStartedPulling="2024-10-08 19:31:28.183025212 +0000 UTC m=+49.791663094" lastFinishedPulling="2024-10-08 19:31:32.074470788 +0000 UTC m=+53.683108669" observedRunningTime="2024-10-08 19:31:33.157529558 +0000 UTC m=+54.766167475" watchObservedRunningTime="2024-10-08 19:31:33.290965106 +0000 UTC m=+54.899603047" Oct 8 19:31:33.461870 systemd[1]: Started sshd@9-172.31.26.181:22-139.178.68.195:51492.service - OpenSSH per-connection server daemon (139.178.68.195:51492). Oct 8 19:31:33.647673 sshd[5416]: Accepted publickey for core from 139.178.68.195 port 51492 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:33.652758 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:33.672288 systemd-logind[1990]: New session 10 of user core. Oct 8 19:31:33.678568 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:31:33.983914 sshd[5416]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:33.991242 systemd[1]: sshd@9-172.31.26.181:22-139.178.68.195:51492.service: Deactivated successfully. Oct 8 19:31:33.999266 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:31:34.008139 systemd-logind[1990]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:31:34.012695 systemd-logind[1990]: Removed session 10. 
Oct 8 19:31:34.724030 containerd[2022]: time="2024-10-08T19:31:34.723966222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:34.729001 containerd[2022]: time="2024-10-08T19:31:34.728926362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:31:34.730543 containerd[2022]: time="2024-10-08T19:31:34.730450650Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:34.737331 containerd[2022]: time="2024-10-08T19:31:34.737260710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:34.741227 containerd[2022]: time="2024-10-08T19:31:34.739273326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 2.663870162s" Oct 8 19:31:34.741227 containerd[2022]: time="2024-10-08T19:31:34.739342854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:31:34.748009 containerd[2022]: time="2024-10-08T19:31:34.747671142Z" level=info msg="CreateContainer within sandbox \"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:31:34.782969 containerd[2022]: time="2024-10-08T19:31:34.782760594Z" level=info msg="CreateContainer within sandbox \"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4b0b252e1ef7c50baf7ce672a4a83f99a0ba385a1362b39cb67c1d8ef7361fc3\"" Oct 8 19:31:34.787146 containerd[2022]: time="2024-10-08T19:31:34.786506910Z" level=info msg="StartContainer for \"4b0b252e1ef7c50baf7ce672a4a83f99a0ba385a1362b39cb67c1d8ef7361fc3\"" Oct 8 19:31:34.879539 systemd[1]: Started cri-containerd-4b0b252e1ef7c50baf7ce672a4a83f99a0ba385a1362b39cb67c1d8ef7361fc3.scope - libcontainer container 4b0b252e1ef7c50baf7ce672a4a83f99a0ba385a1362b39cb67c1d8ef7361fc3. Oct 8 19:31:34.983242 containerd[2022]: time="2024-10-08T19:31:34.983134639Z" level=info msg="StartContainer for \"4b0b252e1ef7c50baf7ce672a4a83f99a0ba385a1362b39cb67c1d8ef7361fc3\" returns successfully" Oct 8 19:31:35.115869 kubelet[3248]: I1008 19:31:35.115761 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zw82g" podStartSLOduration=26.912684408 podStartE2EDuration="35.115740304s" podCreationTimestamp="2024-10-08 19:31:00 +0000 UTC" firstStartedPulling="2024-10-08 19:31:26.540294502 +0000 UTC m=+48.148932407" lastFinishedPulling="2024-10-08 19:31:34.743350434 +0000 UTC m=+56.351988303" observedRunningTime="2024-10-08 19:31:35.11536222 +0000 UTC m=+56.724000125" watchObservedRunningTime="2024-10-08 19:31:35.115740304 +0000 UTC m=+56.724378185" Oct 8 19:31:35.960171 kubelet[3248]: I1008 19:31:35.959619 3248 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:31:35.960171 kubelet[3248]: I1008 19:31:35.959662 3248 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with 
name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:31:38.671326 containerd[2022]: time="2024-10-08T19:31:38.671225781Z" level=info msg="StopPodSandbox for \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\"" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.737 [WARNING][5519] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"722ae81b-5bf7-40e0-b53d-6784c26bbee7", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e", Pod:"csi-node-driver-zw82g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia2b37f852c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.738 [INFO][5519] k8s.go 608: Cleaning up netns ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.738 [INFO][5519] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" iface="eth0" netns="" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.738 [INFO][5519] k8s.go 615: Releasing IP address(es) ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.738 [INFO][5519] utils.go 188: Calico CNI releasing IP address ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.773 [INFO][5525] ipam_plugin.go 417: Releasing address using handleID ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.773 [INFO][5525] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.773 [INFO][5525] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.785 [WARNING][5525] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.785 [INFO][5525] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.787 [INFO][5525] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:38.792693 containerd[2022]: 2024-10-08 19:31:38.789 [INFO][5519] k8s.go 621: Teardown processing complete. ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.793795 containerd[2022]: time="2024-10-08T19:31:38.793350106Z" level=info msg="TearDown network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\" successfully" Oct 8 19:31:38.793795 containerd[2022]: time="2024-10-08T19:31:38.793392226Z" level=info msg="StopPodSandbox for \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\" returns successfully" Oct 8 19:31:38.794572 containerd[2022]: time="2024-10-08T19:31:38.794286166Z" level=info msg="RemovePodSandbox for \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\"" Oct 8 19:31:38.794572 containerd[2022]: time="2024-10-08T19:31:38.794334406Z" level=info msg="Forcibly stopping sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\"" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.855 [WARNING][5543] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"722ae81b-5bf7-40e0-b53d-6784c26bbee7", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"56a430c833a481b971e1016fa0159653fffdcf894adb5facf8b443c9aa30368e", Pod:"csi-node-driver-zw82g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia2b37f852c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.855 [INFO][5543] k8s.go 608: Cleaning up netns ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.855 [INFO][5543] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" iface="eth0" netns="" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.855 [INFO][5543] k8s.go 615: Releasing IP address(es) ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.855 [INFO][5543] utils.go 188: Calico CNI releasing IP address ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.893 [INFO][5549] ipam_plugin.go 417: Releasing address using handleID ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.893 [INFO][5549] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.893 [INFO][5549] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.906 [WARNING][5549] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.906 [INFO][5549] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" HandleID="k8s-pod-network.d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Workload="ip--172--31--26--181-k8s-csi--node--driver--zw82g-eth0" Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.909 [INFO][5549] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:38.914306 containerd[2022]: 2024-10-08 19:31:38.911 [INFO][5543] k8s.go 621: Teardown processing complete. ContainerID="d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e" Oct 8 19:31:38.914306 containerd[2022]: time="2024-10-08T19:31:38.914146126Z" level=info msg="TearDown network for sandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\" successfully" Oct 8 19:31:38.919019 containerd[2022]: time="2024-10-08T19:31:38.918943918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:31:38.919174 containerd[2022]: time="2024-10-08T19:31:38.919056382Z" level=info msg="RemovePodSandbox \"d9be9d37b0aec42e3c27a72cad2926ddaaa1edc5274c96efc58e022faa99526e\" returns successfully" Oct 8 19:31:38.920236 containerd[2022]: time="2024-10-08T19:31:38.919816138Z" level=info msg="StopPodSandbox for \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\"" Oct 8 19:31:39.024831 systemd[1]: Started sshd@10-172.31.26.181:22-139.178.68.195:51506.service - OpenSSH per-connection server daemon (139.178.68.195:51506). Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:38.997 [WARNING][5567] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d74a8c9-7cfd-4102-a721-6998bf392d12", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced", Pod:"coredns-7db6d8ff4d-cgpbr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3984cd44739", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:38.997 [INFO][5567] k8s.go 608: Cleaning up netns ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:38.997 [INFO][5567] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" iface="eth0" netns="" Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:38.997 [INFO][5567] k8s.go 615: Releasing IP address(es) ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:38.998 [INFO][5567] utils.go 188: Calico CNI releasing IP address ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:39.047 [INFO][5573] ipam_plugin.go 417: Releasing address using handleID ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:39.047 [INFO][5573] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:39.048 [INFO][5573] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:39.060 [WARNING][5573] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:39.060 [INFO][5573] ipam_plugin.go 445: Releasing address using workloadID ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:39.062 [INFO][5573] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:39.067558 containerd[2022]: 2024-10-08 19:31:39.065 [INFO][5567] k8s.go 621: Teardown processing complete. 
ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.069339 containerd[2022]: time="2024-10-08T19:31:39.067610767Z" level=info msg="TearDown network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\" successfully" Oct 8 19:31:39.069339 containerd[2022]: time="2024-10-08T19:31:39.067669327Z" level=info msg="StopPodSandbox for \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\" returns successfully" Oct 8 19:31:39.069339 containerd[2022]: time="2024-10-08T19:31:39.068229955Z" level=info msg="RemovePodSandbox for \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\"" Oct 8 19:31:39.069339 containerd[2022]: time="2024-10-08T19:31:39.068279047Z" level=info msg="Forcibly stopping sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\"" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.148 [WARNING][5595] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3d74a8c9-7cfd-4102-a721-6998bf392d12", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"97b0eaa517497a2d007ed134b50565329f97e2b40624e2b601f7a556fe880ced", Pod:"coredns-7db6d8ff4d-cgpbr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3984cd44739", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.149 [INFO][5595] k8s.go 608: Cleaning up 
netns ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.149 [INFO][5595] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" iface="eth0" netns="" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.149 [INFO][5595] k8s.go 615: Releasing IP address(es) ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.149 [INFO][5595] utils.go 188: Calico CNI releasing IP address ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.186 [INFO][5601] ipam_plugin.go 417: Releasing address using handleID ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.187 [INFO][5601] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.187 [INFO][5601] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.199 [WARNING][5601] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.199 [INFO][5601] ipam_plugin.go 445: Releasing address using workloadID ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" HandleID="k8s-pod-network.82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--cgpbr-eth0" Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.201 [INFO][5601] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:39.206832 containerd[2022]: 2024-10-08 19:31:39.204 [INFO][5595] k8s.go 621: Teardown processing complete. ContainerID="82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127" Oct 8 19:31:39.208461 containerd[2022]: time="2024-10-08T19:31:39.208001312Z" level=info msg="TearDown network for sandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\" successfully" Oct 8 19:31:39.213410 containerd[2022]: time="2024-10-08T19:31:39.213338600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:31:39.213554 containerd[2022]: time="2024-10-08T19:31:39.213447044Z" level=info msg="RemovePodSandbox \"82aa85ea18667748e16c003b3a9dce3ba5d8ac9a12d866a296266d2d3929a127\" returns successfully" Oct 8 19:31:39.214486 containerd[2022]: time="2024-10-08T19:31:39.214142984Z" level=info msg="StopPodSandbox for \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\"" Oct 8 19:31:39.215112 sshd[5578]: Accepted publickey for core from 139.178.68.195 port 51506 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:39.218830 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:39.231318 systemd-logind[1990]: New session 11 of user core. Oct 8 19:31:39.239166 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.327 [WARNING][5619] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ip-172-31-26-181", ContainerID:"a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf", Pod:"coredns-7db6d8ff4d-mlxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali726836978e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.329 [INFO][5619] k8s.go 608: Cleaning up netns ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.329 [INFO][5619] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" iface="eth0" netns="" Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.329 [INFO][5619] k8s.go 615: Releasing IP address(es) ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.329 [INFO][5619] utils.go 188: Calico CNI releasing IP address ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.409 [INFO][5627] ipam_plugin.go 417: Releasing address using handleID ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.409 [INFO][5627] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.409 [INFO][5627] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.423 [WARNING][5627] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.424 [INFO][5627] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.427 [INFO][5627] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:39.436334 containerd[2022]: 2024-10-08 19:31:39.430 [INFO][5619] k8s.go 621: Teardown processing complete. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.438094 containerd[2022]: time="2024-10-08T19:31:39.437330745Z" level=info msg="TearDown network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\" successfully" Oct 8 19:31:39.438094 containerd[2022]: time="2024-10-08T19:31:39.437411409Z" level=info msg="StopPodSandbox for \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\" returns successfully" Oct 8 19:31:39.439102 containerd[2022]: time="2024-10-08T19:31:39.438376185Z" level=info msg="RemovePodSandbox for \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\"" Oct 8 19:31:39.439102 containerd[2022]: time="2024-10-08T19:31:39.438429321Z" level=info msg="Forcibly stopping sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\"" Oct 8 19:31:39.556391 sshd[5578]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:39.564957 systemd[1]: sshd@10-172.31.26.181:22-139.178.68.195:51506.service: Deactivated successfully. 
Oct 8 19:31:39.572590 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:31:39.579010 systemd-logind[1990]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:31:39.600684 systemd[1]: Started sshd@11-172.31.26.181:22-139.178.68.195:51522.service - OpenSSH per-connection server daemon (139.178.68.195:51522). Oct 8 19:31:39.604174 systemd-logind[1990]: Removed session 11. Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.531 [WARNING][5653] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1cb798ed-2ab0-4a6d-84b4-808d1cf3653e", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 30, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"a8c3dfd936c3479a04bdd5756aa7409c3c7d03e2392809e718355db615d2badf", Pod:"coredns-7db6d8ff4d-mlxwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali726836978e9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.532 [INFO][5653] k8s.go 608: Cleaning up netns ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.532 [INFO][5653] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" iface="eth0" netns="" Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.533 [INFO][5653] k8s.go 615: Releasing IP address(es) ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.533 [INFO][5653] utils.go 188: Calico CNI releasing IP address ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.578 [INFO][5659] ipam_plugin.go 417: Releasing address using handleID ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.579 [INFO][5659] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.579 [INFO][5659] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.597 [WARNING][5659] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.598 [INFO][5659] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" HandleID="k8s-pod-network.5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Workload="ip--172--31--26--181-k8s-coredns--7db6d8ff4d--mlxwm-eth0" Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.603 [INFO][5659] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:39.612128 containerd[2022]: 2024-10-08 19:31:39.608 [INFO][5653] k8s.go 621: Teardown processing complete. ContainerID="5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a" Oct 8 19:31:39.613174 containerd[2022]: time="2024-10-08T19:31:39.612288262Z" level=info msg="TearDown network for sandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\" successfully" Oct 8 19:31:39.620354 containerd[2022]: time="2024-10-08T19:31:39.619912618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:31:39.620354 containerd[2022]: time="2024-10-08T19:31:39.620016250Z" level=info msg="RemovePodSandbox \"5f1b887b049fe2ef48e027b5cdaabf27319252f237750cb73b3cf783712efe7a\" returns successfully" Oct 8 19:31:39.621439 containerd[2022]: time="2024-10-08T19:31:39.620918998Z" level=info msg="StopPodSandbox for \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\"" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.698 [WARNING][5682] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0", GenerateName:"calico-kube-controllers-6fb464564c-", Namespace:"calico-system", SelfLink:"", UID:"8a2d3aef-c6d0-496a-87a4-07be591e7fdd", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb464564c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1", Pod:"calico-kube-controllers-6fb464564c-vrn29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4584102755e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.699 [INFO][5682] k8s.go 608: Cleaning up netns ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.699 [INFO][5682] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" iface="eth0" netns="" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.699 [INFO][5682] k8s.go 615: Releasing IP address(es) ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.699 [INFO][5682] utils.go 188: Calico CNI releasing IP address ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.735 [INFO][5689] ipam_plugin.go 417: Releasing address using handleID ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.735 [INFO][5689] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.735 [INFO][5689] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.748 [WARNING][5689] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.748 [INFO][5689] ipam_plugin.go 445: Releasing address using workloadID ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.751 [INFO][5689] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:39.757422 containerd[2022]: 2024-10-08 19:31:39.754 [INFO][5682] k8s.go 621: Teardown processing complete. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.758729 containerd[2022]: time="2024-10-08T19:31:39.757481375Z" level=info msg="TearDown network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\" successfully" Oct 8 19:31:39.758729 containerd[2022]: time="2024-10-08T19:31:39.757520435Z" level=info msg="StopPodSandbox for \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\" returns successfully" Oct 8 19:31:39.758729 containerd[2022]: time="2024-10-08T19:31:39.758334299Z" level=info msg="RemovePodSandbox for \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\"" Oct 8 19:31:39.758729 containerd[2022]: time="2024-10-08T19:31:39.758382911Z" level=info msg="Forcibly stopping sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\"" Oct 8 19:31:39.785996 sshd[5668]: Accepted publickey for core from 139.178.68.195 port 51522 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:39.790647 sshd[5668]: pam_unix(sshd:session): session opened for 
user core(uid=500) by (uid=0) Oct 8 19:31:39.801609 systemd-logind[1990]: New session 12 of user core. Oct 8 19:31:39.809502 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.837 [WARNING][5708] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0", GenerateName:"calico-kube-controllers-6fb464564c-", Namespace:"calico-system", SelfLink:"", UID:"8a2d3aef-c6d0-496a-87a4-07be591e7fdd", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 31, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb464564c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"1b0126107cad872c6d7a51cc8d4218350a6932347136f321ce6dc276551413e1", Pod:"calico-kube-controllers-6fb464564c-vrn29", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4584102755e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.838 [INFO][5708] k8s.go 608: Cleaning up netns ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.838 [INFO][5708] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" iface="eth0" netns="" Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.838 [INFO][5708] k8s.go 615: Releasing IP address(es) ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.838 [INFO][5708] utils.go 188: Calico CNI releasing IP address ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.877 [INFO][5715] ipam_plugin.go 417: Releasing address using handleID ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.878 [INFO][5715] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.878 [INFO][5715] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.891 [WARNING][5715] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.891 [INFO][5715] ipam_plugin.go 445: Releasing address using workloadID ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" HandleID="k8s-pod-network.19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Workload="ip--172--31--26--181-k8s-calico--kube--controllers--6fb464564c--vrn29-eth0" Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.894 [INFO][5715] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:31:39.899046 containerd[2022]: 2024-10-08 19:31:39.896 [INFO][5708] k8s.go 621: Teardown processing complete. ContainerID="19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728" Oct 8 19:31:39.900284 containerd[2022]: time="2024-10-08T19:31:39.899103263Z" level=info msg="TearDown network for sandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\" successfully" Oct 8 19:31:39.904403 containerd[2022]: time="2024-10-08T19:31:39.904327427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:31:39.904559 containerd[2022]: time="2024-10-08T19:31:39.904441955Z" level=info msg="RemovePodSandbox \"19c8f110a1d61f45e110fabaf8cf198aa300bab0bcf2889fd73ac841d912d728\" returns successfully" Oct 8 19:31:40.134701 sshd[5668]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:40.146406 systemd[1]: sshd@11-172.31.26.181:22-139.178.68.195:51522.service: Deactivated successfully. 
Oct 8 19:31:40.155746 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:31:40.158764 systemd-logind[1990]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:31:40.190426 systemd[1]: Started sshd@12-172.31.26.181:22-139.178.68.195:51532.service - OpenSSH per-connection server daemon (139.178.68.195:51532). Oct 8 19:31:40.195732 systemd-logind[1990]: Removed session 12. Oct 8 19:31:40.376815 sshd[5729]: Accepted publickey for core from 139.178.68.195 port 51532 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:40.379384 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:40.388272 systemd-logind[1990]: New session 13 of user core. Oct 8 19:31:40.394495 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:31:40.634781 sshd[5729]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:40.642548 systemd[1]: sshd@12-172.31.26.181:22-139.178.68.195:51532.service: Deactivated successfully. Oct 8 19:31:40.647505 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:31:40.650642 systemd-logind[1990]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:31:40.653260 systemd-logind[1990]: Removed session 13. Oct 8 19:31:45.677732 systemd[1]: Started sshd@13-172.31.26.181:22-139.178.68.195:51294.service - OpenSSH per-connection server daemon (139.178.68.195:51294). Oct 8 19:31:45.851901 sshd[5764]: Accepted publickey for core from 139.178.68.195 port 51294 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:45.854608 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:45.862314 systemd-logind[1990]: New session 14 of user core. Oct 8 19:31:45.869443 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 8 19:31:46.114428 sshd[5764]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:46.120903 systemd[1]: sshd@13-172.31.26.181:22-139.178.68.195:51294.service: Deactivated successfully. Oct 8 19:31:46.125584 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:31:46.127740 systemd-logind[1990]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:31:46.129593 systemd-logind[1990]: Removed session 14. Oct 8 19:31:51.162727 systemd[1]: Started sshd@14-172.31.26.181:22-139.178.68.195:54752.service - OpenSSH per-connection server daemon (139.178.68.195:54752). Oct 8 19:31:51.342055 sshd[5791]: Accepted publickey for core from 139.178.68.195 port 54752 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:51.344752 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:51.352168 systemd-logind[1990]: New session 15 of user core. Oct 8 19:31:51.362491 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:31:51.615808 sshd[5791]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:51.621410 systemd[1]: sshd@14-172.31.26.181:22-139.178.68.195:54752.service: Deactivated successfully. Oct 8 19:31:51.625922 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:31:51.628852 systemd-logind[1990]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:31:51.631509 systemd-logind[1990]: Removed session 15. Oct 8 19:31:56.656830 systemd[1]: Started sshd@15-172.31.26.181:22-139.178.68.195:54758.service - OpenSSH per-connection server daemon (139.178.68.195:54758). Oct 8 19:31:56.835985 sshd[5807]: Accepted publickey for core from 139.178.68.195 port 54758 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:56.838735 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:56.847547 systemd-logind[1990]: New session 16 of user core. 
Oct 8 19:31:56.854461 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:31:57.094484 sshd[5807]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:57.100143 systemd[1]: sshd@15-172.31.26.181:22-139.178.68.195:54758.service: Deactivated successfully. Oct 8 19:31:57.104770 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:31:57.108297 systemd-logind[1990]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:31:57.111005 systemd-logind[1990]: Removed session 16. Oct 8 19:32:02.134945 systemd[1]: Started sshd@16-172.31.26.181:22-139.178.68.195:41460.service - OpenSSH per-connection server daemon (139.178.68.195:41460). Oct 8 19:32:02.326093 sshd[5827]: Accepted publickey for core from 139.178.68.195 port 41460 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:32:02.331900 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:32:02.345340 systemd-logind[1990]: New session 17 of user core. Oct 8 19:32:02.353747 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:32:02.676303 sshd[5827]: pam_unix(sshd:session): session closed for user core Oct 8 19:32:02.686269 systemd[1]: sshd@16-172.31.26.181:22-139.178.68.195:41460.service: Deactivated successfully. Oct 8 19:32:02.695351 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:32:02.702229 systemd-logind[1990]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:32:02.704164 systemd-logind[1990]: Removed session 17. Oct 8 19:32:03.431728 kubelet[3248]: I1008 19:32:03.430748 3248 topology_manager.go:215] "Topology Admit Handler" podUID="9f036220-3c2e-4229-811a-775ac375ed86" podNamespace="calico-apiserver" podName="calico-apiserver-7545447874-2g9nt" Oct 8 19:32:03.453678 systemd[1]: Created slice kubepods-besteffort-pod9f036220_3c2e_4229_811a_775ac375ed86.slice - libcontainer container kubepods-besteffort-pod9f036220_3c2e_4229_811a_775ac375ed86.slice. 
Oct 8 19:32:03.538119 kubelet[3248]: I1008 19:32:03.538048 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f036220-3c2e-4229-811a-775ac375ed86-calico-apiserver-certs\") pod \"calico-apiserver-7545447874-2g9nt\" (UID: \"9f036220-3c2e-4229-811a-775ac375ed86\") " pod="calico-apiserver/calico-apiserver-7545447874-2g9nt" Oct 8 19:32:03.538393 kubelet[3248]: I1008 19:32:03.538137 3248 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffbgk\" (UniqueName: \"kubernetes.io/projected/9f036220-3c2e-4229-811a-775ac375ed86-kube-api-access-ffbgk\") pod \"calico-apiserver-7545447874-2g9nt\" (UID: \"9f036220-3c2e-4229-811a-775ac375ed86\") " pod="calico-apiserver/calico-apiserver-7545447874-2g9nt" Oct 8 19:32:03.639908 kubelet[3248]: E1008 19:32:03.639575 3248 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:32:03.639908 kubelet[3248]: E1008 19:32:03.639734 3248 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f036220-3c2e-4229-811a-775ac375ed86-calico-apiserver-certs podName:9f036220-3c2e-4229-811a-775ac375ed86 nodeName:}" failed. No retries permitted until 2024-10-08 19:32:04.139703377 +0000 UTC m=+85.748341270 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9f036220-3c2e-4229-811a-775ac375ed86-calico-apiserver-certs") pod "calico-apiserver-7545447874-2g9nt" (UID: "9f036220-3c2e-4229-811a-775ac375ed86") : secret "calico-apiserver-certs" not found Oct 8 19:32:04.362364 containerd[2022]: time="2024-10-08T19:32:04.362274789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7545447874-2g9nt,Uid:9f036220-3c2e-4229-811a-775ac375ed86,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:32:04.612058 systemd-networkd[1845]: cali3a07d9f7230: Link UP Oct 8 19:32:04.615892 systemd-networkd[1845]: cali3a07d9f7230: Gained carrier Oct 8 19:32:04.625440 (udev-worker)[5864]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.462 [INFO][5850] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0 calico-apiserver-7545447874- calico-apiserver 9f036220-3c2e-4229-811a-775ac375ed86 1042 0 2024-10-08 19:32:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7545447874 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-181 calico-apiserver-7545447874-2g9nt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a07d9f7230 [] []}} ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.463 [INFO][5850] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.526 [INFO][5856] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" HandleID="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Workload="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.545 [INFO][5856] ipam_plugin.go 270: Auto assigning IP ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" HandleID="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Workload="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-181", "pod":"calico-apiserver-7545447874-2g9nt", "timestamp":"2024-10-08 19:32:04.526331386 +0000 UTC"}, Hostname:"ip-172-31-26-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.545 [INFO][5856] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.545 [INFO][5856] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.545 [INFO][5856] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-181' Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.548 [INFO][5856] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.557 [INFO][5856] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.565 [INFO][5856] ipam.go 489: Trying affinity for 192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.569 [INFO][5856] ipam.go 155: Attempting to load block cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.580 [INFO][5856] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.580 [INFO][5856] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.583 [INFO][5856] ipam.go 1685: Creating new handle: k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7 Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.590 [INFO][5856] ipam.go 1203: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.601 [INFO][5856] ipam.go 1216: Successfully claimed IPs: [192.168.84.5/26] block=192.168.84.0/26 
handle="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.601 [INFO][5856] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.84.5/26] handle="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" host="ip-172-31-26-181" Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.601 [INFO][5856] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:32:04.659438 containerd[2022]: 2024-10-08 19:32:04.601 [INFO][5856] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.84.5/26] IPv6=[] ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" HandleID="k8s-pod-network.ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Workload="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" Oct 8 19:32:04.663441 containerd[2022]: 2024-10-08 19:32:04.606 [INFO][5850] k8s.go 386: Populated endpoint ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0", GenerateName:"calico-apiserver-7545447874-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f036220-3c2e-4229-811a-775ac375ed86", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7545447874", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"", Pod:"calico-apiserver-7545447874-2g9nt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a07d9f7230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:32:04.663441 containerd[2022]: 2024-10-08 19:32:04.606 [INFO][5850] k8s.go 387: Calico CNI using IPs: [192.168.84.5/32] ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" Oct 8 19:32:04.663441 containerd[2022]: 2024-10-08 19:32:04.606 [INFO][5850] dataplane_linux.go 68: Setting the host side veth name to cali3a07d9f7230 ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" Oct 8 19:32:04.663441 containerd[2022]: 2024-10-08 19:32:04.615 [INFO][5850] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" Oct 8 19:32:04.663441 containerd[2022]: 2024-10-08 19:32:04.616 [INFO][5850] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0", GenerateName:"calico-apiserver-7545447874-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f036220-3c2e-4229-811a-775ac375ed86", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7545447874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-181", ContainerID:"ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7", Pod:"calico-apiserver-7545447874-2g9nt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a07d9f7230", MAC:"72:0b:99:2f:95:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:32:04.663441 containerd[2022]: 2024-10-08 19:32:04.642 [INFO][5850] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7" Namespace="calico-apiserver" Pod="calico-apiserver-7545447874-2g9nt" WorkloadEndpoint="ip--172--31--26--181-k8s-calico--apiserver--7545447874--2g9nt-eth0" Oct 8 19:32:04.712101 containerd[2022]: time="2024-10-08T19:32:04.711836243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:32:04.714319 containerd[2022]: time="2024-10-08T19:32:04.713910419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:04.714319 containerd[2022]: time="2024-10-08T19:32:04.713973227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:32:04.714319 containerd[2022]: time="2024-10-08T19:32:04.714018011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:04.782064 systemd[1]: run-containerd-runc-k8s.io-ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7-runc.NJA2UC.mount: Deactivated successfully. Oct 8 19:32:04.797527 systemd[1]: Started cri-containerd-ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7.scope - libcontainer container ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7. 
Oct 8 19:32:04.878966 containerd[2022]: time="2024-10-08T19:32:04.878711123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7545447874-2g9nt,Uid:9f036220-3c2e-4229-811a-775ac375ed86,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7\"" Oct 8 19:32:04.885429 containerd[2022]: time="2024-10-08T19:32:04.884978291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:32:06.490628 systemd-networkd[1845]: cali3a07d9f7230: Gained IPv6LL Oct 8 19:32:07.501940 containerd[2022]: time="2024-10-08T19:32:07.501788820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:07.504061 containerd[2022]: time="2024-10-08T19:32:07.504006792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 8 19:32:07.506481 containerd[2022]: time="2024-10-08T19:32:07.506417208Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:07.512757 containerd[2022]: time="2024-10-08T19:32:07.512565852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:07.516155 containerd[2022]: time="2024-10-08T19:32:07.515248272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.630141509s" Oct 8 19:32:07.516155 
containerd[2022]: time="2024-10-08T19:32:07.515326308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:32:07.531623 containerd[2022]: time="2024-10-08T19:32:07.531412597Z" level=info msg="CreateContainer within sandbox \"ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:32:07.571539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064094098.mount: Deactivated successfully. Oct 8 19:32:07.576255 containerd[2022]: time="2024-10-08T19:32:07.576080629Z" level=info msg="CreateContainer within sandbox \"ed2568ee3b28d15a4c271ebc4034e92993d7af3468ed3b32f7e9cdd40cbd4ea7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"07e0428c9d81844277a495fdef57990adf2a6b1647e8e5812c20d8d2b93f6234\"" Oct 8 19:32:07.579919 containerd[2022]: time="2024-10-08T19:32:07.578477557Z" level=info msg="StartContainer for \"07e0428c9d81844277a495fdef57990adf2a6b1647e8e5812c20d8d2b93f6234\"" Oct 8 19:32:07.644522 systemd[1]: Started cri-containerd-07e0428c9d81844277a495fdef57990adf2a6b1647e8e5812c20d8d2b93f6234.scope - libcontainer container 07e0428c9d81844277a495fdef57990adf2a6b1647e8e5812c20d8d2b93f6234. Oct 8 19:32:07.719167 systemd[1]: Started sshd@17-172.31.26.181:22-139.178.68.195:41464.service - OpenSSH per-connection server daemon (139.178.68.195:41464). 
Oct 8 19:32:07.826019 containerd[2022]: time="2024-10-08T19:32:07.825494258Z" level=info msg="StartContainer for \"07e0428c9d81844277a495fdef57990adf2a6b1647e8e5812c20d8d2b93f6234\" returns successfully" Oct 8 19:32:07.928348 sshd[5981]: Accepted publickey for core from 139.178.68.195 port 41464 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:32:07.931167 sshd[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:32:07.941415 systemd-logind[1990]: New session 18 of user core. Oct 8 19:32:07.949488 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:32:08.265637 sshd[5981]: pam_unix(sshd:session): session closed for user core Oct 8 19:32:08.272684 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:32:08.278669 systemd[1]: sshd@17-172.31.26.181:22-139.178.68.195:41464.service: Deactivated successfully. Oct 8 19:32:08.286354 systemd-logind[1990]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:32:08.312775 systemd[1]: Started sshd@18-172.31.26.181:22-139.178.68.195:41468.service - OpenSSH per-connection server daemon (139.178.68.195:41468). Oct 8 19:32:08.318538 systemd-logind[1990]: Removed session 18. Oct 8 19:32:08.510056 sshd[6010]: Accepted publickey for core from 139.178.68.195 port 41468 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:32:08.511657 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:32:08.521267 systemd-logind[1990]: New session 19 of user core. Oct 8 19:32:08.528525 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 8 19:32:08.964884 ntpd[1985]: Listen normally on 13 cali3a07d9f7230 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 19:32:08.969764 ntpd[1985]: 8 Oct 19:32:08 ntpd[1985]: Listen normally on 13 cali3a07d9f7230 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 19:32:09.090384 sshd[6010]: pam_unix(sshd:session): session closed for user core Oct 8 19:32:09.098699 systemd[1]: sshd@18-172.31.26.181:22-139.178.68.195:41468.service: Deactivated successfully. Oct 8 19:32:09.104696 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:32:09.109484 systemd-logind[1990]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:32:09.140330 systemd[1]: Started sshd@19-172.31.26.181:22-139.178.68.195:41484.service - OpenSSH per-connection server daemon (139.178.68.195:41484). Oct 8 19:32:09.143351 systemd-logind[1990]: Removed session 19. Oct 8 19:32:09.327211 sshd[6024]: Accepted publickey for core from 139.178.68.195 port 41484 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:32:09.331890 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:32:09.344339 systemd-logind[1990]: New session 20 of user core. Oct 8 19:32:09.352260 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 8 19:32:11.214344 kubelet[3248]: I1008 19:32:11.214247 3248 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7545447874-2g9nt" podStartSLOduration=5.577923037 podStartE2EDuration="8.214222491s" podCreationTimestamp="2024-10-08 19:32:03 +0000 UTC" firstStartedPulling="2024-10-08 19:32:04.883913711 +0000 UTC m=+86.492551592" lastFinishedPulling="2024-10-08 19:32:07.520213153 +0000 UTC m=+89.128851046" observedRunningTime="2024-10-08 19:32:08.258500928 +0000 UTC m=+89.867138857" watchObservedRunningTime="2024-10-08 19:32:11.214222491 +0000 UTC m=+92.822860384"
Oct 8 19:32:12.888711 sshd[6024]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:12.899172 systemd[1]: sshd@19-172.31.26.181:22-139.178.68.195:41484.service: Deactivated successfully.
Oct 8 19:32:12.909653 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:32:12.910963 systemd[1]: session-20.scope: Consumed 1.027s CPU time.
Oct 8 19:32:12.912661 systemd-logind[1990]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:32:12.941723 systemd[1]: Started sshd@20-172.31.26.181:22-139.178.68.195:40302.service - OpenSSH per-connection server daemon (139.178.68.195:40302).
Oct 8 19:32:12.943539 systemd-logind[1990]: Removed session 20.
Oct 8 19:32:13.124955 sshd[6069]: Accepted publickey for core from 139.178.68.195 port 40302 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:13.127522 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:13.135942 systemd-logind[1990]: New session 21 of user core.
Oct 8 19:32:13.143512 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:32:13.772348 sshd[6069]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:13.779181 systemd[1]: sshd@20-172.31.26.181:22-139.178.68.195:40302.service: Deactivated successfully.
Oct 8 19:32:13.786037 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:32:13.790332 systemd-logind[1990]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:32:13.793156 systemd-logind[1990]: Removed session 21.
Oct 8 19:32:13.814859 systemd[1]: Started sshd@21-172.31.26.181:22-139.178.68.195:40316.service - OpenSSH per-connection server daemon (139.178.68.195:40316).
Oct 8 19:32:14.001634 sshd[6080]: Accepted publickey for core from 139.178.68.195 port 40316 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:14.004367 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:14.013264 systemd-logind[1990]: New session 22 of user core.
Oct 8 19:32:14.021632 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:32:14.309622 sshd[6080]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:14.318504 systemd[1]: sshd@21-172.31.26.181:22-139.178.68.195:40316.service: Deactivated successfully.
Oct 8 19:32:14.325099 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:32:14.327950 systemd-logind[1990]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:32:14.330498 systemd-logind[1990]: Removed session 22.
Oct 8 19:32:19.352729 systemd[1]: Started sshd@22-172.31.26.181:22-139.178.68.195:40330.service - OpenSSH per-connection server daemon (139.178.68.195:40330).
Oct 8 19:32:19.530482 sshd[6100]: Accepted publickey for core from 139.178.68.195 port 40330 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:19.534025 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:19.548602 systemd-logind[1990]: New session 23 of user core.
Oct 8 19:32:19.555467 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:32:19.807331 sshd[6100]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:19.812371 systemd[1]: sshd@22-172.31.26.181:22-139.178.68.195:40330.service: Deactivated successfully.
Oct 8 19:32:19.817854 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:32:19.821990 systemd-logind[1990]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:32:19.824425 systemd-logind[1990]: Removed session 23.
Oct 8 19:32:24.845723 systemd[1]: Started sshd@23-172.31.26.181:22-139.178.68.195:34542.service - OpenSSH per-connection server daemon (139.178.68.195:34542).
Oct 8 19:32:25.018112 sshd[6123]: Accepted publickey for core from 139.178.68.195 port 34542 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:25.020752 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:25.028634 systemd-logind[1990]: New session 24 of user core.
Oct 8 19:32:25.038476 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:32:25.275882 sshd[6123]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:25.281533 systemd[1]: sshd@23-172.31.26.181:22-139.178.68.195:34542.service: Deactivated successfully.
Oct 8 19:32:25.281631 systemd-logind[1990]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:32:25.286166 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:32:25.290619 systemd-logind[1990]: Removed session 24.
Oct 8 19:32:30.314694 systemd[1]: Started sshd@24-172.31.26.181:22-139.178.68.195:34546.service - OpenSSH per-connection server daemon (139.178.68.195:34546).
Oct 8 19:32:30.492789 sshd[6142]: Accepted publickey for core from 139.178.68.195 port 34546 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:30.495474 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:30.503288 systemd-logind[1990]: New session 25 of user core.
Oct 8 19:32:30.513493 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:32:30.762558 sshd[6142]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:30.769315 systemd[1]: sshd@24-172.31.26.181:22-139.178.68.195:34546.service: Deactivated successfully.
Oct 8 19:32:30.773748 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:32:30.775072 systemd-logind[1990]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:32:30.777655 systemd-logind[1990]: Removed session 25.
Oct 8 19:32:35.807704 systemd[1]: Started sshd@25-172.31.26.181:22-139.178.68.195:32838.service - OpenSSH per-connection server daemon (139.178.68.195:32838).
Oct 8 19:32:35.994150 sshd[6155]: Accepted publickey for core from 139.178.68.195 port 32838 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:35.996746 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:36.006753 systemd-logind[1990]: New session 26 of user core.
Oct 8 19:32:36.011479 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:32:36.268290 sshd[6155]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:36.273009 systemd-logind[1990]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:32:36.274060 systemd[1]: sshd@25-172.31.26.181:22-139.178.68.195:32838.service: Deactivated successfully.
Oct 8 19:32:36.278034 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:32:36.285312 systemd-logind[1990]: Removed session 26.
Oct 8 19:32:41.311088 systemd[1]: Started sshd@26-172.31.26.181:22-139.178.68.195:60028.service - OpenSSH per-connection server daemon (139.178.68.195:60028).
Oct 8 19:32:41.493757 sshd[6216]: Accepted publickey for core from 139.178.68.195 port 60028 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:41.496393 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:41.503735 systemd-logind[1990]: New session 27 of user core.
Oct 8 19:32:41.514478 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 19:32:41.804250 sshd[6216]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:41.808958 systemd[1]: sshd@26-172.31.26.181:22-139.178.68.195:60028.service: Deactivated successfully.
Oct 8 19:32:41.813350 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 19:32:41.818466 systemd-logind[1990]: Session 27 logged out. Waiting for processes to exit.
Oct 8 19:32:41.820476 systemd-logind[1990]: Removed session 27.
Oct 8 19:32:46.846740 systemd[1]: Started sshd@27-172.31.26.181:22-139.178.68.195:60044.service - OpenSSH per-connection server daemon (139.178.68.195:60044).
Oct 8 19:32:47.015516 sshd[6251]: Accepted publickey for core from 139.178.68.195 port 60044 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:47.018062 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:47.027386 systemd-logind[1990]: New session 28 of user core.
Oct 8 19:32:47.031483 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 8 19:32:47.266844 sshd[6251]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:47.273610 systemd[1]: sshd@27-172.31.26.181:22-139.178.68.195:60044.service: Deactivated successfully.
Oct 8 19:32:47.277536 systemd[1]: session-28.scope: Deactivated successfully.
Oct 8 19:32:47.279646 systemd-logind[1990]: Session 28 logged out. Waiting for processes to exit.
Oct 8 19:32:47.282002 systemd-logind[1990]: Removed session 28.
Oct 8 19:32:52.307761 systemd[1]: Started sshd@28-172.31.26.181:22-139.178.68.195:47520.service - OpenSSH per-connection server daemon (139.178.68.195:47520).
Oct 8 19:32:52.483236 sshd[6277]: Accepted publickey for core from 139.178.68.195 port 47520 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:32:52.485908 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:32:52.493345 systemd-logind[1990]: New session 29 of user core.
Oct 8 19:32:52.504461 systemd[1]: Started session-29.scope - Session 29 of User core.
Oct 8 19:32:52.741668 sshd[6277]: pam_unix(sshd:session): session closed for user core
Oct 8 19:32:52.748045 systemd[1]: sshd@28-172.31.26.181:22-139.178.68.195:47520.service: Deactivated successfully.
Oct 8 19:32:52.752824 systemd[1]: session-29.scope: Deactivated successfully.
Oct 8 19:32:52.754786 systemd-logind[1990]: Session 29 logged out. Waiting for processes to exit.
Oct 8 19:32:52.757127 systemd-logind[1990]: Removed session 29.
Oct 8 19:33:05.840987 systemd[1]: cri-containerd-1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf.scope: Deactivated successfully.
Oct 8 19:33:05.842804 systemd[1]: cri-containerd-1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf.scope: Consumed 10.085s CPU time.
Oct 8 19:33:05.882408 containerd[2022]: time="2024-10-08T19:33:05.879895738Z" level=info msg="shim disconnected" id=1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf namespace=k8s.io
Oct 8 19:33:05.883621 containerd[2022]: time="2024-10-08T19:33:05.882978682Z" level=warning msg="cleaning up after shim disconnected" id=1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf namespace=k8s.io
Oct 8 19:33:05.883621 containerd[2022]: time="2024-10-08T19:33:05.883022758Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:33:05.883097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf-rootfs.mount: Deactivated successfully.
Oct 8 19:33:06.395343 kubelet[3248]: I1008 19:33:06.395120 3248 scope.go:117] "RemoveContainer" containerID="1f4aea10f5f9786815d28a300b802ec51469c67e4122764e370bd693da754daf"
Oct 8 19:33:06.404886 containerd[2022]: time="2024-10-08T19:33:06.404812281Z" level=info msg="CreateContainer within sandbox \"8ad2d484ed38243fd6d0f510e0032eeb8f62c30788f83986c0ab133a73444b84\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 8 19:33:06.435433 containerd[2022]: time="2024-10-08T19:33:06.435377517Z" level=info msg="CreateContainer within sandbox \"8ad2d484ed38243fd6d0f510e0032eeb8f62c30788f83986c0ab133a73444b84\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4777cf38f5b1ecbf030d176bc7d911b6bf44dddd309e9126199e9c2416572baa\""
Oct 8 19:33:06.437709 containerd[2022]: time="2024-10-08T19:33:06.437297829Z" level=info msg="StartContainer for \"4777cf38f5b1ecbf030d176bc7d911b6bf44dddd309e9126199e9c2416572baa\""
Oct 8 19:33:06.439995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821472107.mount: Deactivated successfully.
Oct 8 19:33:06.489549 systemd[1]: Started cri-containerd-4777cf38f5b1ecbf030d176bc7d911b6bf44dddd309e9126199e9c2416572baa.scope - libcontainer container 4777cf38f5b1ecbf030d176bc7d911b6bf44dddd309e9126199e9c2416572baa.
Oct 8 19:33:06.534834 containerd[2022]: time="2024-10-08T19:33:06.534749518Z" level=info msg="StartContainer for \"4777cf38f5b1ecbf030d176bc7d911b6bf44dddd309e9126199e9c2416572baa\" returns successfully"
Oct 8 19:33:07.698158 systemd[1]: cri-containerd-ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61.scope: Deactivated successfully.
Oct 8 19:33:07.698687 systemd[1]: cri-containerd-ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61.scope: Consumed 4.890s CPU time, 22.3M memory peak, 0B memory swap peak.
Oct 8 19:33:07.738123 containerd[2022]: time="2024-10-08T19:33:07.738023916Z" level=info msg="shim disconnected" id=ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61 namespace=k8s.io
Oct 8 19:33:07.738123 containerd[2022]: time="2024-10-08T19:33:07.738103620Z" level=warning msg="cleaning up after shim disconnected" id=ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61 namespace=k8s.io
Oct 8 19:33:07.740538 containerd[2022]: time="2024-10-08T19:33:07.738125808Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:33:07.744879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61-rootfs.mount: Deactivated successfully.
Oct 8 19:33:08.405684 kubelet[3248]: I1008 19:33:08.405584 3248 scope.go:117] "RemoveContainer" containerID="ec6c8032904ce9916307904b83006f60de91b742d042fd1c21b5f1cfc1171a61"
Oct 8 19:33:08.409721 containerd[2022]: time="2024-10-08T19:33:08.409648703Z" level=info msg="CreateContainer within sandbox \"9b61d75b0a743f1e614989759a8a1f6dcfa1a3a8aa336b01dc16e3e0e2e678f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 8 19:33:08.439131 containerd[2022]: time="2024-10-08T19:33:08.439055999Z" level=info msg="CreateContainer within sandbox \"9b61d75b0a743f1e614989759a8a1f6dcfa1a3a8aa336b01dc16e3e0e2e678f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f0a1c21c32cff9d3e154b61e0fc1c1dd64192685da349fc5997bad6630871e3e\""
Oct 8 19:33:08.441335 containerd[2022]: time="2024-10-08T19:33:08.440921387Z" level=info msg="StartContainer for \"f0a1c21c32cff9d3e154b61e0fc1c1dd64192685da349fc5997bad6630871e3e\""
Oct 8 19:33:08.442260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148430382.mount: Deactivated successfully.
Oct 8 19:33:08.498500 systemd[1]: Started cri-containerd-f0a1c21c32cff9d3e154b61e0fc1c1dd64192685da349fc5997bad6630871e3e.scope - libcontainer container f0a1c21c32cff9d3e154b61e0fc1c1dd64192685da349fc5997bad6630871e3e.
Oct 8 19:33:08.569577 containerd[2022]: time="2024-10-08T19:33:08.569379468Z" level=info msg="StartContainer for \"f0a1c21c32cff9d3e154b61e0fc1c1dd64192685da349fc5997bad6630871e3e\" returns successfully"
Oct 8 19:33:10.812731 systemd[1]: cri-containerd-f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7.scope: Deactivated successfully.
Oct 8 19:33:10.813740 systemd[1]: cri-containerd-f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7.scope: Consumed 3.508s CPU time, 15.9M memory peak, 0B memory swap peak.
Oct 8 19:33:10.869841 containerd[2022]: time="2024-10-08T19:33:10.869660391Z" level=info msg="shim disconnected" id=f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7 namespace=k8s.io
Oct 8 19:33:10.869841 containerd[2022]: time="2024-10-08T19:33:10.869823411Z" level=warning msg="cleaning up after shim disconnected" id=f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7 namespace=k8s.io
Oct 8 19:33:10.871291 containerd[2022]: time="2024-10-08T19:33:10.871241451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:33:10.876014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7-rootfs.mount: Deactivated successfully.
Oct 8 19:33:11.421688 kubelet[3248]: I1008 19:33:11.421641 3248 scope.go:117] "RemoveContainer" containerID="f5dcc23c984a02269e796e4d8f1bc5e9175d4289c3393306543966d0bf1227b7"
Oct 8 19:33:11.427253 containerd[2022]: time="2024-10-08T19:33:11.426048290Z" level=info msg="CreateContainer within sandbox \"710b640f02c82029860f71a2436b5e7434402dce8c5bea9968311bd18198e9d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 8 19:33:11.460213 containerd[2022]: time="2024-10-08T19:33:11.456987350Z" level=info msg="CreateContainer within sandbox \"710b640f02c82029860f71a2436b5e7434402dce8c5bea9968311bd18198e9d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3b058ae279703e1069d6ef74721ebdd9fb363b5ddf330b8558f9c180a84364e8\""
Oct 8 19:33:11.460920 containerd[2022]: time="2024-10-08T19:33:11.460858214Z" level=info msg="StartContainer for \"3b058ae279703e1069d6ef74721ebdd9fb363b5ddf330b8558f9c180a84364e8\""
Oct 8 19:33:11.530491 systemd[1]: Started cri-containerd-3b058ae279703e1069d6ef74721ebdd9fb363b5ddf330b8558f9c180a84364e8.scope - libcontainer container 3b058ae279703e1069d6ef74721ebdd9fb363b5ddf330b8558f9c180a84364e8.
Oct 8 19:33:11.620259 containerd[2022]: time="2024-10-08T19:33:11.618896655Z" level=info msg="StartContainer for \"3b058ae279703e1069d6ef74721ebdd9fb363b5ddf330b8558f9c180a84364e8\" returns successfully"
Oct 8 19:33:11.695703 kubelet[3248]: E1008 19:33:11.694637 3248 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-181?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 8 19:33:21.696255 kubelet[3248]: E1008 19:33:21.696034 3248 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-181?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"