Dec 13 01:54:44.231934 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 13 01:54:44.231985 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:54:44.232013 kernel: KASLR disabled due to lack of seed Dec 13 01:54:44.232032 kernel: efi: EFI v2.7 by EDK II Dec 13 01:54:44.232049 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Dec 13 01:54:44.232065 kernel: ACPI: Early table checksum verification disabled Dec 13 01:54:44.232084 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 13 01:54:44.232157 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 13 01:54:44.232180 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:54:44.232196 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Dec 13 01:54:44.232221 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:54:44.232238 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 13 01:54:44.232254 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 13 01:54:44.232270 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 13 01:54:44.232290 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:54:44.232312 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 13 01:54:44.232330 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 13 01:54:44.232347 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 13 01:54:44.232364 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 13 01:54:44.232381 kernel: printk: bootconsole [uart0] enabled Dec 13 01:54:44.232398 kernel: NUMA: Failed to initialise from firmware Dec 13 01:54:44.232416 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:54:44.232433 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Dec 13 01:54:44.232451 kernel: Zone ranges: Dec 13 01:54:44.232468 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 01:54:44.232484 kernel: DMA32 empty Dec 13 01:54:44.232507 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 13 01:54:44.232524 kernel: Movable zone start for each node Dec 13 01:54:44.232540 kernel: Early memory node ranges Dec 13 01:54:44.232557 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 13 01:54:44.232574 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 13 01:54:44.232590 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 13 01:54:44.232607 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 13 01:54:44.232624 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 13 01:54:44.232641 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 13 01:54:44.232657 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 13 01:54:44.232674 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 13 01:54:44.232691 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:54:44.232713 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Dec 13 01:54:44.232730 kernel: psci: probing for conduit method from ACPI. Dec 13 01:54:44.232754 kernel: psci: PSCIv1.0 detected in firmware. Dec 13 01:54:44.232772 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:54:44.232790 kernel: psci: Trusted OS migration not required Dec 13 01:54:44.232813 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:54:44.232831 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:54:44.232849 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:54:44.232868 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:54:44.232885 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:54:44.232903 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:54:44.232921 kernel: CPU features: detected: Spectre-v2 Dec 13 01:54:44.232939 kernel: CPU features: detected: Spectre-v3a Dec 13 01:54:44.232957 kernel: CPU features: detected: Spectre-BHB Dec 13 01:54:44.232975 kernel: CPU features: detected: ARM erratum 1742098 Dec 13 01:54:44.232992 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 13 01:54:44.233016 kernel: alternatives: applying boot alternatives Dec 13 01:54:44.233036 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:54:44.233055 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:54:44.233073 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:54:44.233091 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:54:44.233149 kernel: Fallback order for Node 0: 0 Dec 13 01:54:44.233202 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Dec 13 01:54:44.233221 kernel: Policy zone: Normal Dec 13 01:54:44.233240 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:54:44.233257 kernel: software IO TLB: area num 2. Dec 13 01:54:44.233275 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Dec 13 01:54:44.233303 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Dec 13 01:54:44.233321 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:54:44.233339 kernel: trace event string verifier disabled Dec 13 01:54:44.233357 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:54:44.233377 kernel: rcu: RCU event tracing is enabled. Dec 13 01:54:44.233395 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:54:44.233413 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:54:44.233431 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:54:44.233449 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:54:44.233492 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:54:44.233511 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:54:44.233535 kernel: GICv3: 96 SPIs implemented Dec 13 01:54:44.233552 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:54:44.233570 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:54:44.233587 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 01:54:44.233605 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 13 01:54:44.233622 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 13 01:54:44.233640 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:54:44.233659 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:54:44.233677 kernel: GICv3: using LPI property table @0x00000004000d0000 Dec 13 01:54:44.233695 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 13 01:54:44.233713 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Dec 13 01:54:44.233731 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:54:44.233756 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 13 01:54:44.233774 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 13 01:54:44.233793 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 13 01:54:44.233811 kernel: Console: colour dummy device 80x25 Dec 13 01:54:44.233829 kernel: printk: console [tty1] enabled Dec 13 01:54:44.233847 kernel: ACPI: Core revision 20230628 Dec 13 01:54:44.233865 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 13 01:54:44.233883 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:54:44.233901 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:54:44.233924 kernel: landlock: Up and running. Dec 13 01:54:44.233942 kernel: SELinux: Initializing. Dec 13 01:54:44.233960 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:54:44.233978 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:54:44.233996 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:54:44.234014 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:54:44.234032 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:54:44.234050 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:54:44.234068 kernel: Platform MSI: ITS@0x10080000 domain created Dec 13 01:54:44.234091 kernel: PCI/MSI: ITS@0x10080000 domain created Dec 13 01:54:44.234167 kernel: Remapping and enabling EFI services. Dec 13 01:54:44.234189 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:54:44.234207 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:54:44.234225 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 13 01:54:44.234244 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Dec 13 01:54:44.234262 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 13 01:54:44.234280 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:54:44.234298 kernel: SMP: Total of 2 processors activated. 
Dec 13 01:54:44.234323 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:54:44.234342 kernel: CPU features: detected: 32-bit EL1 Support Dec 13 01:54:44.234360 kernel: CPU features: detected: CRC32 instructions Dec 13 01:54:44.234390 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:54:44.234413 kernel: alternatives: applying system-wide alternatives Dec 13 01:54:44.234432 kernel: devtmpfs: initialized Dec 13 01:54:44.234451 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:54:44.234469 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:54:44.234487 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:54:44.234506 kernel: SMBIOS 3.0.0 present. Dec 13 01:54:44.234529 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 13 01:54:44.234548 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:54:44.234567 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:54:44.234586 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:54:44.234605 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:54:44.234624 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:54:44.234643 kernel: audit: type=2000 audit(0.297:1): state=initialized audit_enabled=0 res=1 Dec 13 01:54:44.234665 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:54:44.234684 kernel: cpuidle: using governor menu Dec 13 01:54:44.234703 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:54:44.234721 kernel: ASID allocator initialised with 65536 entries Dec 13 01:54:44.234740 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:54:44.234758 kernel: Serial: AMBA PL011 UART driver Dec 13 01:54:44.234777 kernel: Modules: 17520 pages in range for non-PLT usage Dec 13 01:54:44.234796 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:54:44.234815 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:54:44.234839 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:54:44.234859 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:54:44.234878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:54:44.234897 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:54:44.234916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:54:44.234936 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:54:44.234955 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:54:44.234974 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:54:44.234994 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:54:44.235019 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:54:44.235040 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:54:44.235060 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:54:44.235080 kernel: ACPI: Interpreter enabled Dec 13 01:54:44.235138 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:54:44.235166 kernel: ACPI: MCFG table detected, 1 entries Dec 13 01:54:44.235186 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Dec 13 01:54:44.235524 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:54:44.235781 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 01:54:44.236018 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 01:54:44.236331 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Dec 13 01:54:44.236576 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Dec 13 01:54:44.236607 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 13 01:54:44.236627 kernel: acpiphp: Slot [1] registered Dec 13 01:54:44.236646 kernel: acpiphp: Slot [2] registered Dec 13 01:54:44.236665 kernel: acpiphp: Slot [3] registered Dec 13 01:54:44.236691 kernel: acpiphp: Slot [4] registered Dec 13 01:54:44.236710 kernel: acpiphp: Slot [5] registered Dec 13 01:54:44.236728 kernel: acpiphp: Slot [6] registered Dec 13 01:54:44.236747 kernel: acpiphp: Slot [7] registered Dec 13 01:54:44.236766 kernel: acpiphp: Slot [8] registered Dec 13 01:54:44.236785 kernel: acpiphp: Slot [9] registered Dec 13 01:54:44.236803 kernel: acpiphp: Slot [10] registered Dec 13 01:54:44.236822 kernel: acpiphp: Slot [11] registered Dec 13 01:54:44.236840 kernel: acpiphp: Slot [12] registered Dec 13 01:54:44.236859 kernel: acpiphp: Slot [13] registered Dec 13 01:54:44.236882 kernel: acpiphp: Slot [14] registered Dec 13 01:54:44.236900 kernel: acpiphp: Slot [15] registered Dec 13 01:54:44.236919 kernel: acpiphp: Slot [16] registered Dec 13 01:54:44.236937 kernel: acpiphp: Slot [17] registered Dec 13 01:54:44.236956 kernel: acpiphp: Slot [18] registered Dec 13 01:54:44.236975 kernel: acpiphp: Slot [19] registered Dec 13 01:54:44.236994 kernel: acpiphp: Slot [20] registered Dec 13 01:54:44.237012 kernel: acpiphp: Slot [21] registered Dec 13 01:54:44.237030 kernel: acpiphp: Slot [22] registered Dec 13 01:54:44.237054 kernel: acpiphp: Slot [23] registered Dec 13 01:54:44.237074 kernel: acpiphp: Slot [24] registered Dec 13 01:54:44.237094 kernel: acpiphp: Slot [25] registered Dec 13 01:54:44.237175 kernel: acpiphp: Slot [26] registered Dec 13 01:54:44.237204 kernel: acpiphp: Slot [27] registered Dec 13 01:54:44.237225 kernel: acpiphp: Slot [28] registered Dec 13 01:54:44.237244 kernel: acpiphp: Slot [29] registered Dec 13 01:54:44.237264 kernel: acpiphp: Slot [30] registered Dec 13 01:54:44.237285 kernel: acpiphp: Slot [31] registered Dec 13 01:54:44.237304 kernel: PCI host bridge to bus 0000:00 Dec 13 01:54:44.237638 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 13 01:54:44.237856 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 01:54:44.238059 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 13 01:54:44.240351 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Dec 13 01:54:44.240637 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Dec 13 01:54:44.240899 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Dec 13 01:54:44.241211 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Dec 13 01:54:44.241517 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:54:44.241828 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Dec 13 01:54:44.242236 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:54:44.242525 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:54:44.242762 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Dec 13 01:54:44.243007 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Dec 13 01:54:44.243380 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Dec 13 01:54:44.243644 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:54:44.243878 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Dec 13 01:54:44.246209 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Dec 13 01:54:44.246527 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Dec 13 01:54:44.246780 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Dec 13 01:54:44.247026 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Dec 13 01:54:44.247318 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 13 01:54:44.247541 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 01:54:44.247743 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 13 01:54:44.247793 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 01:54:44.247822 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 01:54:44.247843 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 01:54:44.247863 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 01:54:44.247883 kernel: iommu: Default domain type: Translated Dec 13 01:54:44.247914 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:54:44.247934 kernel: efivars: Registered efivars operations Dec 13 01:54:44.247953 kernel: vgaarb: loaded Dec 13 01:54:44.247972 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:54:44.247991 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:54:44.248010 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:54:44.248030 kernel: pnp: PnP ACPI init Dec 13 01:54:44.248348 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 13 01:54:44.248392 kernel: pnp: PnP ACPI: found 1 devices Dec 13 01:54:44.248413 kernel: NET: Registered PF_INET protocol family Dec 13 01:54:44.248432 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:54:44.248451 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:54:44.248470 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:54:44.248489 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:54:44.248508 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:54:44.248528 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:54:44.248547 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:54:44.248571 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:54:44.248590 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:54:44.248609 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:54:44.248627 kernel: kvm [1]: HYP mode not available Dec 13 01:54:44.248646 kernel: Initialise system trusted keyrings Dec 13 01:54:44.248665 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:54:44.248684 kernel: Key type asymmetric registered Dec 13 01:54:44.248702 kernel: Asymmetric key parser 'x509' registered Dec 13 01:54:44.248720 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:54:44.248744 kernel: io scheduler mq-deadline registered Dec 13 
01:54:44.248762 kernel: io scheduler kyber registered Dec 13 01:54:44.248782 kernel: io scheduler bfq registered Dec 13 01:54:44.249032 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 13 01:54:44.249064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 01:54:44.249083 kernel: ACPI: button: Power Button [PWRB] Dec 13 01:54:44.250229 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 13 01:54:44.250283 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 01:54:44.250317 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:54:44.250338 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 01:54:44.250646 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 13 01:54:44.250682 kernel: printk: console [ttyS0] disabled Dec 13 01:54:44.250703 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 13 01:54:44.250724 kernel: printk: console [ttyS0] enabled Dec 13 01:54:44.250743 kernel: printk: bootconsole [uart0] disabled Dec 13 01:54:44.250762 kernel: thunder_xcv, ver 1.0 Dec 13 01:54:44.250782 kernel: thunder_bgx, ver 1.0 Dec 13 01:54:44.250812 kernel: nicpf, ver 1.0 Dec 13 01:54:44.250831 kernel: nicvf, ver 1.0 Dec 13 01:54:44.253928 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:54:44.254448 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:54:43 UTC (1734054883) Dec 13 01:54:44.254486 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:54:44.254506 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Dec 13 01:54:44.254526 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:54:44.254545 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:54:44.254575 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:54:44.254595 kernel: Segment Routing with IPv6 Dec 13 01:54:44.254614 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:54:44.254632 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:54:44.254652 kernel: Key type dns_resolver registered Dec 13 01:54:44.254671 kernel: registered taskstats version 1 Dec 13 01:54:44.254690 kernel: Loading compiled-in X.509 certificates Dec 13 01:54:44.254710 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:54:44.254730 kernel: Key type .fscrypt registered Dec 13 01:54:44.254753 kernel: Key type fscrypt-provisioning registered Dec 13 01:54:44.254772 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:54:44.254791 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:54:44.254812 kernel: ima: No architecture policies found Dec 13 01:54:44.254831 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:54:44.254850 kernel: clk: Disabling unused clocks Dec 13 01:54:44.254870 kernel: Freeing unused kernel memory: 39360K Dec 13 01:54:44.254889 kernel: Run /init as init process Dec 13 01:54:44.254908 kernel: with arguments: Dec 13 01:54:44.254928 kernel: /init Dec 13 01:54:44.254952 kernel: with environment: Dec 13 01:54:44.254971 kernel: HOME=/ Dec 13 01:54:44.254990 kernel: TERM=linux Dec 13 01:54:44.255009 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:54:44.255035 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:44.255062 systemd[1]: Detected virtualization amazon. Dec 13 01:54:44.255084 systemd[1]: Detected architecture arm64. Dec 13 01:54:44.257207 systemd[1]: Running in initrd. Dec 13 01:54:44.257241 systemd[1]: No hostname configured, using default hostname. Dec 13 01:54:44.257262 systemd[1]: Hostname set to . Dec 13 01:54:44.257285 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:54:44.257307 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:54:44.257329 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:44.257350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:44.257374 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:54:44.257403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:44.257424 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:54:44.257446 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:54:44.257495 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:54:44.257519 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:54:44.257541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:44.257562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:44.257590 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:44.257611 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:44.257631 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:54:44.257652 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:44.257672 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:44.257693 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:44.257714 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:54:44.257734 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:54:44.257755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:54:44.257781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:44.257802 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:44.257824 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:44.257845 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:54:44.257866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:44.257887 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:54:44.257908 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:54:44.257930 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:44.257956 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:44.257978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:44.257999 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:44.258021 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:44.258133 systemd-journald[250]: Collecting audit messages is disabled. Dec 13 01:54:44.258199 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:54:44.258223 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:44.258245 systemd-journald[250]: Journal started Dec 13 01:54:44.258291 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2ecfd8810c16ae8f9661861fb42bbd) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:54:44.240066 systemd-modules-load[251]: Inserted module 'overlay' Dec 13 01:54:44.268188 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:44.274156 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:54:44.279530 systemd-modules-load[251]: Inserted module 'br_netfilter' Dec 13 01:54:44.282171 kernel: Bridge firewalling registered Dec 13 01:54:44.282686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:54:44.299743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:44.317743 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:44.324496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:44.330270 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:44.347389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:44.356768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:44.362996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:44.368600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:44.399690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:54:44.413327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:44.440904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:44.467495 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 13 01:54:44.486200 systemd-resolved[278]: Positive Trust Anchors: Dec 13 01:54:44.487046 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:54:44.487792 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:54:44.521914 dracut-cmdline[289]: dracut-dracut-053 Dec 13 01:54:44.529979 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:54:44.690144 kernel: SCSI subsystem initialized Dec 13 01:54:44.696134 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:54:44.710145 kernel: iscsi: registered transport (tcp) Dec 13 01:54:44.723136 kernel: random: crng init done Dec 13 01:54:44.723516 systemd-resolved[278]: Defaulting to hostname 'linux'. Dec 13 01:54:44.728735 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:54:44.734344 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:44.755016 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:54:44.755179 kernel: QLogic iSCSI HBA Driver Dec 13 01:54:44.874613 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:44.884573 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:54:44.936164 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:54:44.936244 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:54:44.936271 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:54:45.009187 kernel: raid6: neonx8 gen() 6596 MB/s Dec 13 01:54:45.026234 kernel: raid6: neonx4 gen() 6328 MB/s Dec 13 01:54:45.043168 kernel: raid6: neonx2 gen() 5292 MB/s Dec 13 01:54:45.060170 kernel: raid6: neonx1 gen() 3877 MB/s Dec 13 01:54:45.077160 kernel: raid6: int64x8 gen() 3747 MB/s Dec 13 01:54:45.094198 kernel: raid6: int64x4 gen() 3676 MB/s Dec 13 01:54:45.111156 kernel: raid6: int64x2 gen() 3565 MB/s Dec 13 01:54:45.128955 kernel: raid6: int64x1 gen() 2774 MB/s Dec 13 01:54:45.129051 kernel: raid6: using algorithm neonx8 gen() 6596 MB/s Dec 13 01:54:45.146919 kernel: raid6: .... 
xor() 4916 MB/s, rmw enabled Dec 13 01:54:45.146992 kernel: raid6: using neon recovery algorithm Dec 13 01:54:45.155628 kernel: xor: measuring software checksum speed Dec 13 01:54:45.155701 kernel: 8regs : 11032 MB/sec Dec 13 01:54:45.156744 kernel: 32regs : 11969 MB/sec Dec 13 01:54:45.157978 kernel: arm64_neon : 9516 MB/sec Dec 13 01:54:45.158031 kernel: xor: using function: 32regs (11969 MB/sec) Dec 13 01:54:45.242151 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:54:45.263433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:45.273430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:45.319860 systemd-udevd[470]: Using default interface naming scheme 'v255'. Dec 13 01:54:45.328894 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:45.344054 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:54:45.380197 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Dec 13 01:54:45.438927 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:45.455431 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:54:45.572369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:45.586410 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:54:45.627781 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:45.633243 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:45.636227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:45.640505 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:45.668455 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:54:45.705239 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:45.772784 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 01:54:45.772853 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 01:54:45.787054 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:54:45.787481 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:54:45.787789 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:68:b9:43:9a:cb Dec 13 01:54:45.807714 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:54:45.825945 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:45.828092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:45.846217 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:45.853732 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 01:54:45.854273 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:54:45.853200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:45.853530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:45.857734 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:54:45.868145 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:54:45.873988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:45.884996 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:54:45.885071 kernel: GPT:9289727 != 16777215 Dec 13 01:54:45.885117 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:54:45.885148 kernel: GPT:9289727 != 16777215 Dec 13 01:54:45.885173 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:54:45.886140 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:45.913455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:45.922438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:45.982311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:46.032141 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (528) Dec 13 01:54:46.045945 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:54:46.079176 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (526) Dec 13 01:54:46.118827 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:54:46.169556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:54:46.185646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:54:46.188435 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:54:46.208452 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:54:46.226146 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:46.226487 disk-uuid[661]: Primary Header is updated. Dec 13 01:54:46.226487 disk-uuid[661]: Secondary Entries is updated. Dec 13 01:54:46.226487 disk-uuid[661]: Secondary Header is updated. Dec 13 01:54:46.245162 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:47.258135 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:47.260245 disk-uuid[663]: The operation has completed successfully. Dec 13 01:54:47.472956 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:54:47.475699 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:54:47.518397 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:54:47.528290 sh[1006]: Success Dec 13 01:54:47.554514 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:54:47.670228 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:54:47.678349 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:54:47.686022 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:54:47.728819 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:54:47.728909 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:47.728938 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:54:47.730548 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:54:47.731751 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:54:47.786155 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:54:47.800205 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:54:47.804316 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:54:47.812423 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:54:47.826383 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:54:47.866717 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:47.866863 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:47.868758 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:47.885173 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:47.902878 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:54:47.907175 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:47.919550 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:54:47.929523 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:54:48.008051 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:48.019458 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:54:48.083909 systemd-networkd[1198]: lo: Link UP Dec 13 01:54:48.083933 systemd-networkd[1198]: lo: Gained carrier Dec 13 01:54:48.088978 systemd-networkd[1198]: Enumeration completed Dec 13 01:54:48.089726 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:54:48.095011 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:54:48.095029 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:54:48.095182 systemd[1]: Reached target network.target - Network. Dec 13 01:54:48.106595 systemd-networkd[1198]: eth0: Link UP Dec 13 01:54:48.106602 systemd-networkd[1198]: eth0: Gained carrier Dec 13 01:54:48.106619 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:54:48.127188 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.19.88/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:54:48.296846 ignition[1135]: Ignition 2.19.0 Dec 13 01:54:48.296877 ignition[1135]: Stage: fetch-offline Dec 13 01:54:48.298726 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:48.298758 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:48.302479 ignition[1135]: Ignition finished successfully Dec 13 01:54:48.308670 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:48.327601 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:54:48.352945 ignition[1218]: Ignition 2.19.0 Dec 13 01:54:48.352979 ignition[1218]: Stage: fetch Dec 13 01:54:48.354560 ignition[1218]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:48.354601 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:48.354787 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:48.393483 ignition[1218]: PUT result: OK Dec 13 01:54:48.397900 ignition[1218]: parsed url from cmdline: "" Dec 13 01:54:48.397920 ignition[1218]: no config URL provided Dec 13 01:54:48.397936 ignition[1218]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:54:48.397963 ignition[1218]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:54:48.397999 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:48.400183 ignition[1218]: PUT result: OK Dec 13 01:54:48.400267 ignition[1218]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:54:48.402711 ignition[1218]: GET result: OK Dec 13 01:54:48.402876 ignition[1218]: parsing config with SHA512: 5172c4d987ccfe4326893d08edab5ccdf573b246bbd7edf7db26c329f7d3b04f787c90390e884b32b1be66b99b9b4c95bfabb3daac25d8b82c63419c503eed5b Dec 13 01:54:48.418916 unknown[1218]: fetched base config from "system" Dec 13 01:54:48.418945 unknown[1218]: fetched base config from "system" Dec 13 01:54:48.423012 ignition[1218]: fetch: fetch complete Dec 13 01:54:48.418961 unknown[1218]: fetched user config from "aws" Dec 13 01:54:48.423027 ignition[1218]: fetch: fetch passed Dec 13 01:54:48.423206 ignition[1218]: Ignition finished successfully Dec 13 01:54:48.434132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:54:48.450423 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:54:48.477053 ignition[1224]: Ignition 2.19.0 Dec 13 01:54:48.477087 ignition[1224]: Stage: kargs Dec 13 01:54:48.478663 ignition[1224]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:48.478694 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:48.478866 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:48.481043 ignition[1224]: PUT result: OK Dec 13 01:54:48.491365 ignition[1224]: kargs: kargs passed Dec 13 01:54:48.491491 ignition[1224]: Ignition finished successfully Dec 13 01:54:48.496636 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:54:48.505481 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 01:54:48.539061 ignition[1230]: Ignition 2.19.0 Dec 13 01:54:48.542585 ignition[1230]: Stage: disks Dec 13 01:54:48.543631 ignition[1230]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:48.543668 ignition[1230]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:48.543851 ignition[1230]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:48.546172 ignition[1230]: PUT result: OK Dec 13 01:54:48.556814 ignition[1230]: disks: disks passed Dec 13 01:54:48.556946 ignition[1230]: Ignition finished successfully Dec 13 01:54:48.562375 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:54:48.567067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:48.572176 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:54:48.574766 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:54:48.578910 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:54:48.585136 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:48.595423 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:54:48.649353 systemd-fsck[1238]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:54:48.656179 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:54:48.672436 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:54:48.750127 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:54:48.750961 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:54:48.754856 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:48.782418 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:48.789331 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:54:48.793210 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:54:48.793302 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:54:48.793354 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:48.815390 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:54:48.829507 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1257) Dec 13 01:54:48.829552 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:48.829579 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:48.829606 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:48.833473 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:54:48.847161 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:48.855832 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:54:49.134753 initrd-setup-root[1281]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:54:49.145890 initrd-setup-root[1288]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:54:49.156684 initrd-setup-root[1295]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:54:49.177307 initrd-setup-root[1302]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:54:49.491783 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:49.511673 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:54:49.518419 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:54:49.533157 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:54:49.537019 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:49.585505 ignition[1369]: INFO : Ignition 2.19.0 Dec 13 01:54:49.585505 ignition[1369]: INFO : Stage: mount Dec 13 01:54:49.585505 ignition[1369]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:49.585505 ignition[1369]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:49.585505 ignition[1369]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:49.597006 ignition[1369]: INFO : PUT result: OK Dec 13 01:54:49.603330 ignition[1369]: INFO : mount: mount passed Dec 13 01:54:49.605268 ignition[1369]: INFO : Ignition finished successfully Dec 13 01:54:49.611171 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:54:49.623405 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:54:49.626552 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:54:49.759616 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:49.798938 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1382) Dec 13 01:54:49.799022 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:49.799053 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:49.801758 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:49.809606 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:49.812784 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:54:49.857853 ignition[1398]: INFO : Ignition 2.19.0 Dec 13 01:54:49.857853 ignition[1398]: INFO : Stage: files Dec 13 01:54:49.861997 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:49.861997 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:49.861997 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:49.861997 ignition[1398]: INFO : PUT result: OK Dec 13 01:54:49.873026 ignition[1398]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:54:49.898621 ignition[1398]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:54:49.898621 ignition[1398]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:54:49.909240 ignition[1398]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:54:49.912007 ignition[1398]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:54:49.914812 unknown[1398]: wrote ssh authorized keys file for user: core Dec 13 01:54:49.917051 ignition[1398]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:54:49.920146 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:54:49.920146 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:54:49.928321 systemd-networkd[1198]: eth0: Gained IPv6LL Dec 13 01:54:50.024119 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:54:50.177775 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:54:50.181570 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:50.185227 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:50.185227 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:50.193718 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 01:54:50.687808 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:54:51.132270 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:51.132270 ignition[1398]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:54:51.140626 ignition[1398]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:54:51.140626 ignition[1398]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:54:51.140626 ignition[1398]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:54:51.140626 ignition[1398]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:54:51.140626 ignition[1398]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:54:51.157303 ignition[1398]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:51.157303 ignition[1398]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:51.157303 ignition[1398]: INFO : files: files passed Dec 13 01:54:51.157303 ignition[1398]: INFO : Ignition finished successfully Dec 13 01:54:51.159687 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:54:51.188589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:54:51.196465 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:54:51.202679 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:54:51.202911 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:54:51.239029 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:51.242576 initrd-setup-root-after-ignition[1431]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:51.242576 initrd-setup-root-after-ignition[1427]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:51.254222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:51.258559 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:54:51.276665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Dec 13 01:54:51.348539 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:54:51.350677 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:54:51.356608 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:54:51.360499 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:54:51.365058 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:54:51.374516 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:54:51.419484 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:51.432458 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:54:51.469364 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:51.474068 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:51.478729 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:54:51.481595 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:54:51.482481 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:51.485794 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:54:51.488525 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:54:51.498925 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:54:51.502177 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:51.509137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:51.512244 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:54:51.519429 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:51.522540 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:54:51.526710 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:54:51.529663 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:54:51.534560 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:54:51.535000 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:51.543513 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:51.551267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:51.554240 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:54:51.557494 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:51.565217 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:54:51.565538 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:51.568238 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:54:51.568639 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:51.571527 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:54:51.571784 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:54:51.590945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:54:51.613311 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Dec 13 01:54:51.615472 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:54:51.615806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:51.620520 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:54:51.620781 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:51.646976 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:54:51.648748 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:54:51.668752 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:54:51.670784 ignition[1451]: INFO : Ignition 2.19.0 Dec 13 01:54:51.670784 ignition[1451]: INFO : Stage: umount Dec 13 01:54:51.676244 ignition[1451]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:51.676244 ignition[1451]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:51.676244 ignition[1451]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:51.683651 ignition[1451]: INFO : PUT result: OK Dec 13 01:54:51.688759 ignition[1451]: INFO : umount: umount passed Dec 13 01:54:51.688759 ignition[1451]: INFO : Ignition finished successfully Dec 13 01:54:51.694538 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:54:51.695951 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:54:51.699684 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:54:51.699819 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:54:51.702254 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:54:51.702372 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:54:51.704816 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:54:51.704940 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:54:51.710068 systemd[1]: Stopped target network.target - Network. Dec 13 01:54:51.716391 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:54:51.716529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:51.729590 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:54:51.732303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:54:51.736290 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:51.738901 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:54:51.746577 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:54:51.748637 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:54:51.748735 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:51.751310 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:54:51.751691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:51.763144 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:54:51.763275 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:54:51.768759 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:54:51.768884 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:51.773858 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Dec 13 01:54:51.780515 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:54:51.783189 systemd-networkd[1198]: eth0: DHCPv6 lease lost Dec 13 01:54:51.796883 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:54:51.799748 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:54:51.808733 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:54:51.809983 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:54:51.823601 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:54:51.823768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:51.834351 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:54:51.840294 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:54:51.841069 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:51.844955 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:54:51.845077 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:51.849294 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:54:51.849801 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:51.858380 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:54:51.858510 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:51.861165 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:51.865371 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:54:51.865623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:54:51.871234 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:54:51.873352 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:51.907750 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:54:51.909892 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:51.922932 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:54:51.925226 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:51.927729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:54:51.927818 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:51.931566 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:54:51.931932 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:51.936319 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:54:51.936456 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:51.948163 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:51.948297 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:51.960478 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:54:51.968215 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:54:51.968568 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:54:51.977646 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:54:51.977782 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:51.981013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:54:51.981169 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:51.993006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:51.993213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:51.997015 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:54:51.997595 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:54:52.019748 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:54:52.021294 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:54:52.025739 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:54:52.045617 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:54:52.063954 systemd[1]: Switching root. Dec 13 01:54:52.111824 systemd-journald[250]: Journal stopped Dec 13 01:54:54.404488 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Dec 13 01:54:54.404654 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:54:54.404717 kernel: SELinux: policy capability open_perms=1 Dec 13 01:54:54.404756 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:54:54.404799 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:54:54.404830 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:54:54.404882 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:54:54.404915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:54:54.404950 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:54:54.404986 kernel: audit: type=1403 audit(1734054892.502:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:54:54.405049 systemd[1]: Successfully loaded SELinux policy in 75.549ms. Dec 13 01:54:54.407692 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.468ms. Dec 13 01:54:54.407785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:54.407822 systemd[1]: Detected virtualization amazon. Dec 13 01:54:54.407858 systemd[1]: Detected architecture arm64. Dec 13 01:54:54.407902 systemd[1]: Detected first boot. Dec 13 01:54:54.407941 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:54:54.407978 zram_generator::config[1494]: No configuration found. Dec 13 01:54:54.408015 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:54:54.408047 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:54:54.408080 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:54:54.408218 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:54:54.408258 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Dec 13 01:54:54.408304 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:54:54.408350 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:54:54.408381 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:54:54.408413 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:54:54.408447 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:54:54.408482 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:54:54.408516 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:54:54.408555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:54.408594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:54.408627 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:54:54.408660 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:54:54.408696 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:54:54.408727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:54.408761 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:54:54.408792 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:54.408826 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:54:54.408863 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:54:54.408902 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:54.408934 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:54:54.408971 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:54.409010 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:54.409043 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:54.409078 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:54:54.409216 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:54:54.409262 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:54:54.409307 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:54.409341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:54.409372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:54.409431 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:54:54.409466 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:54:54.409497 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:54:54.409534 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:54:54.409573 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:54:54.409607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:54:54.409652 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Dec 13 01:54:54.409691 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:54:54.409725 systemd[1]: Reached target machines.target - Containers. Dec 13 01:54:54.409771 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:54:54.409803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:54:54.409840 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:54.409872 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:54:54.409906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:54:54.409947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:54:54.409979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:54:54.410010 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:54:54.410040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:54:54.410083 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:54:54.412207 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:54:54.412255 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:54:54.412288 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:54:54.412321 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:54:54.412364 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:54.412395 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:54.412428 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:54:54.412463 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:54:54.412494 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:54:54.412529 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:54:54.412561 systemd[1]: Stopped verity-setup.service. Dec 13 01:54:54.412595 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:54:54.412626 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:54:54.412668 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:54:54.412701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:54:54.412734 kernel: loop: module loaded Dec 13 01:54:54.412766 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:54:54.412802 kernel: ACPI: bus type drm_connector registered Dec 13 01:54:54.412834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:54:54.412867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:54.412906 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:54:54.412941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:54:54.413035 systemd-journald[1576]: Collecting audit messages is disabled. 
Dec 13 01:54:54.413148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:54:54.413204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:54:54.413254 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:54:54.413297 systemd-journald[1576]: Journal started Dec 13 01:54:54.413355 systemd-journald[1576]: Runtime Journal (/run/log/journal/ec2ecfd8810c16ae8f9661861fb42bbd) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:54:54.418857 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:54:53.794309 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:54:53.855923 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:54:53.856918 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:54:54.442275 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:54.439215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:54:54.441542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:54:54.444867 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:54:54.445427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:54:54.471169 kernel: fuse: init (API version 7.39) Dec 13 01:54:54.478904 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:54:54.479380 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:54:54.483278 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:54:54.497069 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:54.504349 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:54:54.508674 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:54:54.522442 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:54:54.536456 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:54:54.541478 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:54:54.541613 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:54:54.548728 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:54:54.563420 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:54:54.573536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:54:54.576677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:54:54.590812 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:54:54.598471 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:54:54.601302 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:54:54.610540 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 13 01:54:54.615364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:54:54.633532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:54.641642 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:54:54.650374 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:54.656613 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:54:54.659582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:54:54.664230 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:54:54.682063 systemd-journald[1576]: Time spent on flushing to /var/log/journal/ec2ecfd8810c16ae8f9661861fb42bbd is 37.135ms for 907 entries. Dec 13 01:54:54.682063 systemd-journald[1576]: System Journal (/var/log/journal/ec2ecfd8810c16ae8f9661861fb42bbd) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:54:54.740902 systemd-journald[1576]: Received client request to flush runtime journal. Dec 13 01:54:54.733354 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:54:54.753915 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:54:54.758877 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:54:54.778877 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:54:54.782975 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:54:54.794566 kernel: loop0: detected capacity change from 0 to 52536 Dec 13 01:54:54.830489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:54.853734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:54:54.858209 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:54:54.858449 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:54:54.876804 systemd-tmpfiles[1623]: ACLs are not supported, ignoring. Dec 13 01:54:54.876848 systemd-tmpfiles[1623]: ACLs are not supported, ignoring. Dec 13 01:54:54.890950 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:54.902485 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:54:54.919437 kernel: loop1: detected capacity change from 0 to 114432 Dec 13 01:54:54.932265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:54.947025 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:54:54.999864 udevadm[1642]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:54:55.041784 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:54:55.056487 kernel: loop2: detected capacity change from 0 to 114328 Dec 13 01:54:55.057552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:55.131275 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Dec 13 01:54:55.131331 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. 
Dec 13 01:54:55.143261 kernel: loop3: detected capacity change from 0 to 194096 Dec 13 01:54:55.144198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:55.210228 kernel: loop4: detected capacity change from 0 to 52536 Dec 13 01:54:55.261739 kernel: loop5: detected capacity change from 0 to 114432 Dec 13 01:54:55.277204 kernel: loop6: detected capacity change from 0 to 114328 Dec 13 01:54:55.295160 kernel: loop7: detected capacity change from 0 to 194096 Dec 13 01:54:55.335354 (sd-merge)[1651]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:54:55.336450 (sd-merge)[1651]: Merged extensions into '/usr'. Dec 13 01:54:55.351698 systemd[1]: Reloading requested from client PID 1622 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:54:55.351736 systemd[1]: Reloading... Dec 13 01:54:55.527191 zram_generator::config[1676]: No configuration found. Dec 13 01:54:56.011877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:56.156345 systemd[1]: Reloading finished in 803 ms. Dec 13 01:54:56.202835 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:54:56.224587 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:56.227954 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:54:56.244339 systemd[1]: Starting ensure-sysext.service... Dec 13 01:54:56.252312 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:54:56.285440 systemd[1]: Reloading requested from client PID 1732 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:54:56.286256 systemd[1]: Reloading... Dec 13 01:54:56.334840 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:54:56.337810 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:54:56.340898 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:54:56.342787 systemd-tmpfiles[1733]: ACLs are not supported, ignoring. Dec 13 01:54:56.343657 systemd-tmpfiles[1733]: ACLs are not supported, ignoring. Dec 13 01:54:56.356897 systemd-udevd[1730]: Using default interface naming scheme 'v255'. Dec 13 01:54:56.363256 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:54:56.363277 systemd-tmpfiles[1733]: Skipping /boot Dec 13 01:54:56.383248 ldconfig[1614]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:54:56.401919 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:54:56.402210 systemd-tmpfiles[1733]: Skipping /boot Dec 13 01:54:56.540866 zram_generator::config[1773]: No configuration found. Dec 13 01:54:56.718834 (udev-worker)[1768]: Network interface NamePolicy= disabled on kernel command line. 
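The (sd-merge) lines above show systemd-sysext activating the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' extension images and merging them into /usr. As a rough illustration only (Python stdlib pathlib; the directories are a few of systemd-sysext's search locations, and the kubernetes symlink is the one Ignition wrote earlier in this log — the script itself is not part of Flatcar's tooling), a scan of those directories would look like:

```python
#!/usr/bin/env python3
"""Illustrative sketch: list sysext images like the ones merged by (sd-merge) above."""
from pathlib import Path

# A few of the directories systemd-sysext looks in for extension images.
SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

def list_extension_images():
    images = []
    for base in SEARCH_DIRS:
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            # Flatcar ships extensions as raw images; earlier in this log Ignition wrote
            # /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
            if entry.suffix == ".raw":
                images.append((entry, entry.resolve()))
    return images

if __name__ == "__main__":
    for link, target in list_extension_images():
        print(f"{link} -> {target}")
```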
Dec 13 01:54:56.730231 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1775) Dec 13 01:54:56.779199 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1775) Dec 13 01:54:56.911790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:56.937432 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1768) Dec 13 01:54:57.092951 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:54:57.093206 systemd[1]: Reloading finished in 806 ms. Dec 13 01:54:57.131424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:57.137544 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:54:57.141292 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:57.207410 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:54:57.217525 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:54:57.225534 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:54:57.234511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:54:57.246521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:54:57.252561 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:54:57.271024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:54:57.278217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:54:57.282587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:54:57.287919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:54:57.290475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:54:57.296259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:54:57.296684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:54:57.312827 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:54:57.326446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:54:57.411146 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:54:57.415263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:54:57.415653 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:54:57.489744 systemd[1]: Finished ensure-sysext.service. Dec 13 01:54:57.496208 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:54:57.500583 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Dec 13 01:54:57.507406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:54:57.509245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:54:57.513884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:54:57.515581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:54:57.521232 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:54:57.521904 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:54:57.526739 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:54:57.528353 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:54:57.539857 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:54:57.595053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:54:57.595255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:54:57.605497 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:54:57.613865 augenrules[1963]: No rules Dec 13 01:54:57.615476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:57.617646 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:54:57.619000 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:54:57.633786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:54:57.639465 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:54:57.660788 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:54:57.673518 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:54:57.677895 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:54:57.682199 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:54:57.717884 lvm[1971]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:54:57.769503 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:54:57.772811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:57.786714 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:54:57.801348 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:54:57.833570 lvm[1984]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:54:57.896463 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:54:57.928362 systemd-networkd[1926]: lo: Link UP Dec 13 01:54:57.928385 systemd-networkd[1926]: lo: Gained carrier Dec 13 01:54:57.932245 systemd-resolved[1927]: Positive Trust Anchors: Dec 13 01:54:57.932833 systemd-resolved[1927]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:54:57.932904 systemd-resolved[1927]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:54:57.933819 systemd-networkd[1926]: Enumeration completed Dec 13 01:54:57.934031 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:54:57.942924 systemd-networkd[1926]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:54:57.942955 systemd-networkd[1926]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:54:57.946467 systemd-networkd[1926]: eth0: Link UP Dec 13 01:54:57.946864 systemd-networkd[1926]: eth0: Gained carrier Dec 13 01:54:57.946900 systemd-networkd[1926]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:54:57.948521 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:54:57.958163 systemd-resolved[1927]: Defaulting to hostname 'linux'. Dec 13 01:54:57.960306 systemd-networkd[1926]: eth0: DHCPv4 address 172.31.19.88/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:54:57.963558 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:54:57.966534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:57.972343 systemd[1]: Reached target network.target - Network. Dec 13 01:54:57.974915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:57.977579 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:54:57.980033 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:54:57.982486 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:54:57.985306 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:54:57.987693 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:54:57.990267 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:54:57.992748 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:54:57.992816 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:57.994752 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:58.000273 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:54:58.005622 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:54:58.013482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:54:58.016838 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
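The DHCPv4 lease recorded above (172.31.19.88/20 with gateway 172.31.16.1 from 172.31.16.1) can be sanity-checked with a short sketch using Python's stdlib ipaddress module (illustrative only, not part of the log): the gateway sits inside the /20 handed to eth0.

```python
#!/usr/bin/env python3
"""Sketch: confirm the DHCPv4 gateway above is on-link for the leased /20."""
import ipaddress

# Values taken verbatim from the systemd-networkd line in this log.
iface = ipaddress.ip_interface("172.31.19.88/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)              # 172.31.16.0/20
print(gateway in iface.network)   # True: the gateway is reachable directly on eth0
```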
Dec 13 01:54:58.019470 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:58.021559 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:58.024072 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:54:58.024147 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:54:58.032518 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:54:58.042470 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:54:58.056328 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:54:58.061836 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:54:58.075526 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:54:58.078582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:54:58.083253 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:54:58.100358 jq[1997]: false Dec 13 01:54:58.102168 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:54:58.109573 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:54:58.115688 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:54:58.127924 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:54:58.137754 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:54:58.166815 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:54:58.179419 extend-filesystems[1998]: Found loop4 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found loop5 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found loop6 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found loop7 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1p1 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1p2 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1p3 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found usr Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1p4 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1p6 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1p7 Dec 13 01:54:58.179419 extend-filesystems[1998]: Found nvme0n1p9 Dec 13 01:54:58.179419 extend-filesystems[1998]: Checking size of /dev/nvme0n1p9 Dec 13 01:54:58.282342 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:54:58.170554 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:54:58.282560 extend-filesystems[1998]: Resized partition /dev/nvme0n1p9 Dec 13 01:54:58.172795 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:54:58.290474 extend-filesystems[2018]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:54:58.182692 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:54:58.194418 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 13 01:54:58.204771 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:54:58.208315 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:54:58.266811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:54:58.338136 dbus-daemon[1996]: [system] SELinux support is enabled Dec 13 01:54:58.338739 jq[2012]: true Dec 13 01:54:58.363212 dbus-daemon[1996]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1926 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:54:58.389003 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:54:58.379424 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:54:58.388644 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:54:58.392924 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:54:58.396323 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:54:58.398544 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:54:58.413250 extend-filesystems[2018]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:54:58.413250 extend-filesystems[2018]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:54:58.413250 extend-filesystems[2018]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:54:58.424403 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: ---------------------------------------------------- Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: corporation. Support and training for ntp-4 are Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: available at https://www.nwtime.org/support Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: ---------------------------------------------------- Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: proto: precision = 0.096 usec (-23) Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: basedate set to 2024-11-30 Dec 13 01:54:58.433865 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: gps base set to 2024-12-01 (week 2343) Dec 13 01:54:58.434563 extend-filesystems[1998]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:54:58.424458 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:54:58.438818 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Listen normally on 3 eth0 172.31.19.88:123 Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Listen normally on 4 lo [::1]:123 Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: bind(21) AF_INET6 fe80::468:b9ff:fe43:9acb%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: unable to create socket on eth0 (5) for fe80::468:b9ff:fe43:9acb%2#123 Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: failed to init interface for address fe80::468:b9ff:fe43:9acb%2 Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: Listening on routing socket on fd #21 for interface updates Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:58.449089 ntpd[2000]: 13 Dec 01:54:58 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:58.424479 ntpd[2000]: ---------------------------------------------------- Dec 13 01:54:58.439302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:54:58.479480 tar[2019]: linux-arm64/helm Dec 13 01:54:58.424499 ntpd[2000]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:54:58.464971 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:54:58.424518 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:54:58.465041 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:54:58.424537 ntpd[2000]: corporation. Support and training for ntp-4 are Dec 13 01:54:58.470948 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:54:58.424556 ntpd[2000]: available at https://www.nwtime.org/support Dec 13 01:54:58.470987 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
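For reference, the resize2fs figures logged above for /dev/nvme0n1p9 (553472 to 1489915 4k blocks) translate into byte sizes as follows; this is a one-off arithmetic sketch, not part of the log.

```python
#!/usr/bin/env python3
"""Sketch: convert the 4k block counts reported by resize2fs above into sizes."""
BLOCK = 4096  # resize2fs reports 4k blocks for this filesystem

counts = {"before": 553_472, "after": 1_489_915}  # from the EXT4-fs / resize2fs lines above

for label, blocks in counts.items():
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes ≈ {size / 2**30:.2f} GiB")
# before ≈ 2.11 GiB, after ≈ 5.68 GiB: the / filesystem roughly tripled during first boot.
```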
Dec 13 01:54:58.424575 ntpd[2000]: ---------------------------------------------------- Dec 13 01:54:58.427131 ntpd[2000]: proto: precision = 0.096 usec (-23) Dec 13 01:54:58.427619 ntpd[2000]: basedate set to 2024-11-30 Dec 13 01:54:58.427645 ntpd[2000]: gps base set to 2024-12-01 (week 2343) Dec 13 01:54:58.436545 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:54:58.436634 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:54:58.436936 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:54:58.437001 ntpd[2000]: Listen normally on 3 eth0 172.31.19.88:123 Dec 13 01:54:58.437068 ntpd[2000]: Listen normally on 4 lo [::1]:123 Dec 13 01:54:58.437213 ntpd[2000]: bind(21) AF_INET6 fe80::468:b9ff:fe43:9acb%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:54:58.437257 ntpd[2000]: unable to create socket on eth0 (5) for fe80::468:b9ff:fe43:9acb%2#123 Dec 13 01:54:58.437285 ntpd[2000]: failed to init interface for address fe80::468:b9ff:fe43:9acb%2 Dec 13 01:54:58.437350 ntpd[2000]: Listening on routing socket on fd #21 for interface updates Dec 13 01:54:58.440841 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:58.440898 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:58.467178 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:54:58.511919 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:54:58.524992 (ntainerd)[2044]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:54:58.547639 update_engine[2010]: I20241213 01:54:58.547413 2010 main.cc:92] Flatcar Update Engine starting Dec 13 01:54:58.566994 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:54:58.573078 update_engine[2010]: I20241213 01:54:58.572974 2010 update_check_scheduler.cc:74] Next update check in 9m26s Dec 13 01:54:58.587190 jq[2042]: true Dec 13 01:54:58.589497 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:54:58.602195 systemd-logind[2008]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:54:58.602276 systemd-logind[2008]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:54:58.603274 systemd-logind[2008]: New seat seat0. Dec 13 01:54:58.607369 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 01:54:58.616565 coreos-metadata[1995]: Dec 13 01:54:58.616 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:54:58.618400 coreos-metadata[1995]: Dec 13 01:54:58.618 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:54:58.619550 coreos-metadata[1995]: Dec 13 01:54:58.619 INFO Fetch successful Dec 13 01:54:58.619550 coreos-metadata[1995]: Dec 13 01:54:58.619 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:54:58.626076 coreos-metadata[1995]: Dec 13 01:54:58.624 INFO Fetch successful Dec 13 01:54:58.626076 coreos-metadata[1995]: Dec 13 01:54:58.624 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:54:58.626496 coreos-metadata[1995]: Dec 13 01:54:58.626 INFO Fetch successful Dec 13 01:54:58.626496 coreos-metadata[1995]: Dec 13 01:54:58.626 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:54:58.630337 coreos-metadata[1995]: Dec 13 01:54:58.630 INFO Fetch successful Dec 13 01:54:58.630337 coreos-metadata[1995]: Dec 13 01:54:58.630 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:54:58.631920 coreos-metadata[1995]: Dec 13 01:54:58.631 INFO Fetch failed with 404: resource not found Dec 13 01:54:58.631920 coreos-metadata[1995]: Dec 13 01:54:58.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:54:58.637141 coreos-metadata[1995]: Dec 13 01:54:58.634 INFO Fetch successful Dec 13 01:54:58.637141 coreos-metadata[1995]: Dec 13 01:54:58.634 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:54:58.637141 coreos-metadata[1995]: Dec 13 01:54:58.635 INFO Fetch successful Dec 13 01:54:58.637141 coreos-metadata[1995]: Dec 13 01:54:58.635 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:54:58.637860 coreos-metadata[1995]: Dec 13 01:54:58.637 INFO Fetch successful Dec 13 01:54:58.637860 coreos-metadata[1995]: Dec 13 01:54:58.637 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:54:58.638790 coreos-metadata[1995]: Dec 13 01:54:58.638 INFO Fetch successful Dec 13 01:54:58.638790 coreos-metadata[1995]: Dec 13 01:54:58.638 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:54:58.650191 coreos-metadata[1995]: Dec 13 01:54:58.647 INFO Fetch successful Dec 13 01:54:58.685953 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:54:58.719388 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:54:58.723914 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:54:58.728142 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1768) Dec 13 01:54:58.832580 bash[2102]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:54:58.841787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:54:58.882658 systemd[1]: Starting sshkeys.service... Dec 13 01:54:58.942425 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:54:58.981796 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
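The coreos-metadata fetches above follow the EC2 IMDSv2 pattern visible in the log: one PUT to http://169.254.169.254/latest/api/token, then GETs against the 2021-01-03 meta-data tree with the token attached. A minimal sketch of that flow (Python stdlib only; the paths mirror the ones logged, but the code is illustrative and is not Flatcar's coreos-metadata implementation):

```python
#!/usr/bin/env python3
"""Sketch of the IMDSv2 token-then-fetch sequence seen in the coreos-metadata lines above."""
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    # PUT http://169.254.169.254/latest/api/token  (the "Putting ... token" / "PUT result: OK" lines)
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # GET http://169.254.169.254/2021-01-03/meta-data/<path> with the session token attached
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # A few of the paths fetched in the log above.
    for path in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
        print(path, "=", imds_get(path, token))
```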
Dec 13 01:54:59.177067 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:54:59.178251 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:54:59.186255 dbus-daemon[1996]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2053 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:54:59.271982 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:54:59.315878 containerd[2044]: time="2024-12-13T01:54:59.312552022Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:54:59.317606 coreos-metadata[2126]: Dec 13 01:54:59.317 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:54:59.325126 coreos-metadata[2126]: Dec 13 01:54:59.321 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:54:59.325126 coreos-metadata[2126]: Dec 13 01:54:59.322 INFO Fetch successful Dec 13 01:54:59.325126 coreos-metadata[2126]: Dec 13 01:54:59.322 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:54:59.325644 polkitd[2157]: Started polkitd version 121 Dec 13 01:54:59.330050 coreos-metadata[2126]: Dec 13 01:54:59.329 INFO Fetch successful Dec 13 01:54:59.333828 unknown[2126]: wrote ssh authorized keys file for user: core Dec 13 01:54:59.353275 polkitd[2157]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:54:59.353437 polkitd[2157]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:54:59.359025 polkitd[2157]: Finished loading, compiling and executing 2 rules Dec 13 01:54:59.361789 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:54:59.362627 polkitd[2157]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:54:59.367667 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:54:59.404907 update-ssh-keys[2175]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:54:59.404843 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:54:59.412949 systemd[1]: Finished sshkeys.service. Dec 13 01:54:59.430183 ntpd[2000]: bind(24) AF_INET6 fe80::468:b9ff:fe43:9acb%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:54:59.430896 ntpd[2000]: 13 Dec 01:54:59 ntpd[2000]: bind(24) AF_INET6 fe80::468:b9ff:fe43:9acb%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:54:59.430896 ntpd[2000]: 13 Dec 01:54:59 ntpd[2000]: unable to create socket on eth0 (6) for fe80::468:b9ff:fe43:9acb%2#123 Dec 13 01:54:59.430896 ntpd[2000]: 13 Dec 01:54:59 ntpd[2000]: failed to init interface for address fe80::468:b9ff:fe43:9acb%2 Dec 13 01:54:59.430248 ntpd[2000]: unable to create socket on eth0 (6) for fe80::468:b9ff:fe43:9acb%2#123 Dec 13 01:54:59.430278 ntpd[2000]: failed to init interface for address fe80::468:b9ff:fe43:9acb%2 Dec 13 01:54:59.431435 systemd-hostnamed[2053]: Hostname set to (transient) Dec 13 01:54:59.431596 systemd-resolved[1927]: System hostname changed to 'ip-172-31-19-88'. Dec 13 01:54:59.464338 locksmithd[2057]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:54:59.483305 containerd[2044]: time="2024-12-13T01:54:59.477959447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:54:59.483305 containerd[2044]: time="2024-12-13T01:54:59.482636075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:59.483305 containerd[2044]: time="2024-12-13T01:54:59.482710019Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:54:59.483305 containerd[2044]: time="2024-12-13T01:54:59.482756891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:54:59.483305 containerd[2044]: time="2024-12-13T01:54:59.483090815Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.484231427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.484438991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.484473251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.484787747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.484825043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.484856651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.484882595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:59.485144 containerd[2044]: time="2024-12-13T01:54:59.485054603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:59.491027 containerd[2044]: time="2024-12-13T01:54:59.489622691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:59.494860 containerd[2044]: time="2024-12-13T01:54:59.494798087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:59.500177 containerd[2044]: time="2024-12-13T01:54:59.499770131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 01:54:59.500177 containerd[2044]: time="2024-12-13T01:54:59.500018279Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:54:59.500670 containerd[2044]: time="2024-12-13T01:54:59.500382839Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:54:59.513977 containerd[2044]: time="2024-12-13T01:54:59.513921623Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:54:59.514243 containerd[2044]: time="2024-12-13T01:54:59.514187927Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:54:59.515080 containerd[2044]: time="2024-12-13T01:54:59.514396175Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:54:59.515080 containerd[2044]: time="2024-12-13T01:54:59.514470647Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:54:59.515080 containerd[2044]: time="2024-12-13T01:54:59.514519715Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:54:59.515080 containerd[2044]: time="2024-12-13T01:54:59.514919747Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:54:59.520698 containerd[2044]: time="2024-12-13T01:54:59.516618503Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:54:59.521652 containerd[2044]: time="2024-12-13T01:54:59.521607035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:54:59.521809 containerd[2044]: time="2024-12-13T01:54:59.521779607Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:54:59.521970 containerd[2044]: time="2024-12-13T01:54:59.521940971Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:54:59.522126 containerd[2044]: time="2024-12-13T01:54:59.522073691Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522229115Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522267479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522329663Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522388667Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522425363Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522481715Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522515963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522596639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522631835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.522810 containerd[2044]: time="2024-12-13T01:54:59.522662207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.523979 containerd[2044]: time="2024-12-13T01:54:59.522694139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.523979 containerd[2044]: time="2024-12-13T01:54:59.523614791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.523979 containerd[2044]: time="2024-12-13T01:54:59.523649711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.523979 containerd[2044]: time="2024-12-13T01:54:59.523679279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.523979 containerd[2044]: time="2024-12-13T01:54:59.523710323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.523979 containerd[2044]: time="2024-12-13T01:54:59.523744967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.525150 containerd[2044]: time="2024-12-13T01:54:59.523782371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.525150 containerd[2044]: time="2024-12-13T01:54:59.524647667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.525150 containerd[2044]: time="2024-12-13T01:54:59.524693327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.525150 containerd[2044]: time="2024-12-13T01:54:59.524726219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.525150 containerd[2044]: time="2024-12-13T01:54:59.524776943Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:54:59.527194 containerd[2044]: time="2024-12-13T01:54:59.524828855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.527194 containerd[2044]: time="2024-12-13T01:54:59.526853399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.526885835Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527508563Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527554967Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527584019Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527628635Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527658827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527699963Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527727143Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:54:59.531033 containerd[2044]: time="2024-12-13T01:54:59.527758871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:54:59.531541 containerd[2044]: time="2024-12-13T01:54:59.528366179Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:54:59.531541 containerd[2044]: time="2024-12-13T01:54:59.528476315Z" level=info msg="Connect containerd service" Dec 13 01:54:59.531541 containerd[2044]: time="2024-12-13T01:54:59.528524075Z" level=info msg="using legacy CRI server" Dec 13 01:54:59.531541 containerd[2044]: time="2024-12-13T01:54:59.528541607Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:54:59.531541 containerd[2044]: time="2024-12-13T01:54:59.528704951Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540420167Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540639551Z" level=info msg="Start subscribing containerd event" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540726575Z" level=info msg="Start recovering state" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540864863Z" level=info msg="Start event monitor" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540890123Z" level=info msg="Start snapshots syncer" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540915011Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540935795Z" level=info msg="Start streaming server" Dec 13 01:54:59.543040 containerd[2044]: time="2024-12-13T01:54:59.540989579Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:54:59.544213 containerd[2044]: time="2024-12-13T01:54:59.544167455Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:54:59.547199 containerd[2044]: time="2024-12-13T01:54:59.544878863Z" level=info msg="containerd successfully booted in 0.248201s" Dec 13 01:54:59.545015 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:54:59.720288 systemd-networkd[1926]: eth0: Gained IPv6LL Dec 13 01:54:59.733256 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:54:59.736946 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:54:59.752715 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:54:59.768521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:59.777643 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:54:59.888870 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:54:59.962157 amazon-ssm-agent[2201]: Initializing new seelog logger Dec 13 01:54:59.962157 amazon-ssm-agent[2201]: New Seelog Logger Creation Complete Dec 13 01:54:59.962157 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:54:59.962157 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:59.962996 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 processing appconfig overrides Dec 13 01:54:59.963559 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:59.963559 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:59.963751 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 processing appconfig overrides Dec 13 01:54:59.966485 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:59.966485 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:59.967190 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 processing appconfig overrides Dec 13 01:54:59.967696 amazon-ssm-agent[2201]: 2024-12-13 01:54:59 INFO Proxy environment variables: Dec 13 01:54:59.976937 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:59.976937 amazon-ssm-agent[2201]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:59.977226 amazon-ssm-agent[2201]: 2024/12/13 01:54:59 processing appconfig overrides Dec 13 01:55:00.070944 amazon-ssm-agent[2201]: 2024-12-13 01:54:59 INFO https_proxy: Dec 13 01:55:00.147490 tar[2019]: linux-arm64/LICENSE Dec 13 01:55:00.147490 tar[2019]: linux-arm64/README.md Dec 13 01:55:00.174160 amazon-ssm-agent[2201]: 2024-12-13 01:54:59 INFO http_proxy: Dec 13 01:55:00.202231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:55:00.273152 amazon-ssm-agent[2201]: 2024-12-13 01:54:59 INFO no_proxy: Dec 13 01:55:00.371386 amazon-ssm-agent[2201]: 2024-12-13 01:54:59 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:55:00.470087 amazon-ssm-agent[2201]: 2024-12-13 01:54:59 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:55:00.569385 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO Agent will take identity from EC2 Dec 13 01:55:00.668747 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:00.769148 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:00.864844 sshd_keygen[2054]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:55:00.869590 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:00.955438 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:55:00.968806 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:55:00.969628 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:55:00.980356 systemd[1]: Started sshd@0-172.31.19.88:22-139.178.68.195:44358.service - OpenSSH per-connection server daemon (139.178.68.195:44358). Dec 13 01:55:01.008299 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:55:01.010041 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:55:01.022558 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:55:01.061218 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:55:01.074962 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 13 01:55:01.077407 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:55:01.092376 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:55:01.096571 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [Registrar] Starting registrar module Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:01 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:01 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:01 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:55:01.161439 amazon-ssm-agent[2201]: 2024-12-13 01:55:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:55:01.178698 amazon-ssm-agent[2201]: 2024-12-13 01:55:01 INFO [CredentialRefresher] Next credential rotation will be in 32.08331889043333 minutes Dec 13 01:55:01.253024 sshd[2231]: Accepted publickey for core from 139.178.68.195 port 44358 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:01.256726 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:01.284260 systemd-logind[2008]: New session 1 of user core. Dec 13 01:55:01.286732 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:55:01.296857 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:55:01.334256 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:55:01.346686 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:55:01.373707 (systemd)[2242]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:01.556474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:01.560285 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:55:01.571931 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:01.632195 systemd[2242]: Queued start job for default target default.target. Dec 13 01:55:01.644721 systemd[2242]: Created slice app.slice - User Application Slice. Dec 13 01:55:01.645049 systemd[2242]: Reached target paths.target - Paths. Dec 13 01:55:01.645366 systemd[2242]: Reached target timers.target - Timers. Dec 13 01:55:01.648891 systemd[2242]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:55:01.690416 systemd[2242]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:55:01.690565 systemd[2242]: Reached target sockets.target - Sockets. Dec 13 01:55:01.690601 systemd[2242]: Reached target basic.target - Basic System. Dec 13 01:55:01.690704 systemd[2242]: Reached target default.target - Main User Target. Dec 13 01:55:01.690773 systemd[2242]: Startup finished in 302ms. 
Dec 13 01:55:01.690983 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:55:01.703495 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:55:01.709177 systemd[1]: Startup finished in 1.284s (kernel) + 8.674s (initrd) + 9.278s (userspace) = 19.237s. Dec 13 01:55:01.878065 systemd[1]: Started sshd@1-172.31.19.88:22-139.178.68.195:44366.service - OpenSSH per-connection server daemon (139.178.68.195:44366). Dec 13 01:55:02.067819 sshd[2267]: Accepted publickey for core from 139.178.68.195 port 44366 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:02.071538 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:02.085750 systemd-logind[2008]: New session 2 of user core. Dec 13 01:55:02.092433 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:55:02.197454 amazon-ssm-agent[2201]: 2024-12-13 01:55:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:55:02.231401 sshd[2267]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:02.242962 systemd[1]: sshd@1-172.31.19.88:22-139.178.68.195:44366.service: Deactivated successfully. Dec 13 01:55:02.252416 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:55:02.254611 systemd-logind[2008]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:55:02.277626 systemd[1]: Started sshd@2-172.31.19.88:22-139.178.68.195:44378.service - OpenSSH per-connection server daemon (139.178.68.195:44378). Dec 13 01:55:02.282727 systemd-logind[2008]: Removed session 2. Dec 13 01:55:02.298679 amazon-ssm-agent[2201]: 2024-12-13 01:55:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2272) started Dec 13 01:55:02.400066 amazon-ssm-agent[2201]: 2024-12-13 01:55:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:55:02.425210 ntpd[2000]: Listen normally on 7 eth0 [fe80::468:b9ff:fe43:9acb%2]:123 Dec 13 01:55:02.425974 ntpd[2000]: 13 Dec 01:55:02 ntpd[2000]: Listen normally on 7 eth0 [fe80::468:b9ff:fe43:9acb%2]:123 Dec 13 01:55:02.483836 sshd[2279]: Accepted publickey for core from 139.178.68.195 port 44378 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:02.486234 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:02.496508 systemd-logind[2008]: New session 3 of user core. Dec 13 01:55:02.504459 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:55:02.631509 sshd[2279]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:02.638911 systemd[1]: sshd@2-172.31.19.88:22-139.178.68.195:44378.service: Deactivated successfully. Dec 13 01:55:02.643812 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:55:02.648760 systemd-logind[2008]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:55:02.674695 systemd[1]: Started sshd@3-172.31.19.88:22-139.178.68.195:44394.service - OpenSSH per-connection server daemon (139.178.68.195:44394). Dec 13 01:55:02.680207 systemd-logind[2008]: Removed session 3. 
Dec 13 01:55:02.795224 kubelet[2253]: E1213 01:55:02.795017 2253 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:02.802708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:02.803448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:02.804177 systemd[1]: kubelet.service: Consumed 1.398s CPU time. Dec 13 01:55:02.851538 sshd[2292]: Accepted publickey for core from 139.178.68.195 port 44394 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:02.854669 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:02.864273 systemd-logind[2008]: New session 4 of user core. Dec 13 01:55:02.875482 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:55:03.005297 sshd[2292]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:03.011336 systemd[1]: sshd@3-172.31.19.88:22-139.178.68.195:44394.service: Deactivated successfully. Dec 13 01:55:03.015192 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:55:03.018192 systemd-logind[2008]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:55:03.020883 systemd-logind[2008]: Removed session 4. Dec 13 01:55:03.050681 systemd[1]: Started sshd@4-172.31.19.88:22-139.178.68.195:44408.service - OpenSSH per-connection server daemon (139.178.68.195:44408). Dec 13 01:55:03.221153 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 44408 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:03.224010 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:03.232222 systemd-logind[2008]: New session 5 of user core. Dec 13 01:55:03.243365 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:55:03.389372 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:55:03.390014 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:03.409720 sudo[2304]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:03.433612 sshd[2301]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:03.441989 systemd[1]: sshd@4-172.31.19.88:22-139.178.68.195:44408.service: Deactivated successfully. Dec 13 01:55:03.446348 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:55:03.448011 systemd-logind[2008]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:55:03.450085 systemd-logind[2008]: Removed session 5. Dec 13 01:55:03.478634 systemd[1]: Started sshd@5-172.31.19.88:22-139.178.68.195:44410.service - OpenSSH per-connection server daemon (139.178.68.195:44410). Dec 13 01:55:03.664274 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 44410 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:03.667550 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:03.677621 systemd-logind[2008]: New session 6 of user core. Dec 13 01:55:03.690451 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 01:55:03.798516 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:55:03.799356 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:03.806218 sudo[2313]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:03.816552 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:55:03.817853 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:03.843723 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:03.846294 auditctl[2316]: No rules Dec 13 01:55:03.846994 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:55:03.848210 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:03.861930 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:03.904418 augenrules[2334]: No rules Dec 13 01:55:03.906846 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:03.909823 sudo[2312]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:03.933455 sshd[2309]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:03.940711 systemd-logind[2008]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:55:03.942377 systemd[1]: sshd@5-172.31.19.88:22-139.178.68.195:44410.service: Deactivated successfully. Dec 13 01:55:03.946004 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:55:03.949540 systemd-logind[2008]: Removed session 6. Dec 13 01:55:03.978657 systemd[1]: Started sshd@6-172.31.19.88:22-139.178.68.195:44418.service - OpenSSH per-connection server daemon (139.178.68.195:44418). Dec 13 01:55:04.158515 sshd[2342]: Accepted publickey for core from 139.178.68.195 port 44418 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:04.161374 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:04.170565 systemd-logind[2008]: New session 7 of user core. Dec 13 01:55:04.181415 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:55:04.287224 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:55:04.287862 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:04.866603 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:55:04.868539 (dockerd)[2363]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:55:05.348094 dockerd[2363]: time="2024-12-13T01:55:05.346863712Z" level=info msg="Starting up" Dec 13 01:55:05.835986 systemd-resolved[1927]: Clock change detected. Flushing caches. Dec 13 01:55:05.909776 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3743598505-merged.mount: Deactivated successfully. Dec 13 01:55:06.003931 dockerd[2363]: time="2024-12-13T01:55:06.003367572Z" level=info msg="Loading containers: start." Dec 13 01:55:06.160684 kernel: Initializing XFRM netlink socket Dec 13 01:55:06.199066 (udev-worker)[2386]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:55:06.293230 systemd-networkd[1926]: docker0: Link UP Dec 13 01:55:06.321925 dockerd[2363]: time="2024-12-13T01:55:06.321873962Z" level=info msg="Loading containers: done." Dec 13 01:55:06.346543 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck566878792-merged.mount: Deactivated successfully. Dec 13 01:55:06.350088 dockerd[2363]: time="2024-12-13T01:55:06.349371302Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:55:06.350088 dockerd[2363]: time="2024-12-13T01:55:06.349523078Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:55:06.350088 dockerd[2363]: time="2024-12-13T01:55:06.349743230Z" level=info msg="Daemon has completed initialization" Dec 13 01:55:06.403123 dockerd[2363]: time="2024-12-13T01:55:06.403018010Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:55:06.403346 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:55:07.593300 containerd[2044]: time="2024-12-13T01:55:07.593227636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:55:08.283185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834755843.mount: Deactivated successfully. Dec 13 01:55:10.223961 containerd[2044]: time="2024-12-13T01:55:10.223848509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:10.226403 containerd[2044]: time="2024-12-13T01:55:10.226318349Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864010" Dec 13 01:55:10.227781 containerd[2044]: time="2024-12-13T01:55:10.227705741Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:10.235734 containerd[2044]: time="2024-12-13T01:55:10.235663229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:10.238116 containerd[2044]: time="2024-12-13T01:55:10.238053713Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.644758457s" Dec 13 01:55:10.238232 containerd[2044]: time="2024-12-13T01:55:10.238113317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 01:55:10.275353 containerd[2044]: time="2024-12-13T01:55:10.275263242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:55:12.562507 containerd[2044]: time="2024-12-13T01:55:12.560922441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:12.562507 containerd[2044]: time="2024-12-13T01:55:12.561847401Z" 
level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900694" Dec 13 01:55:12.567146 containerd[2044]: time="2024-12-13T01:55:12.567048693Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:12.570238 containerd[2044]: time="2024-12-13T01:55:12.569563377Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.294239511s" Dec 13 01:55:12.570238 containerd[2044]: time="2024-12-13T01:55:12.569705985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 01:55:12.572663 containerd[2044]: time="2024-12-13T01:55:12.571381449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:12.613260 containerd[2044]: time="2024-12-13T01:55:12.613194249Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:55:13.464022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:55:13.476977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:13.814143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:13.821010 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:13.923246 kubelet[2586]: E1213 01:55:13.923157 2586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:13.932420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:13.933788 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:55:14.249213 containerd[2044]: time="2024-12-13T01:55:14.249016281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:14.252608 containerd[2044]: time="2024-12-13T01:55:14.252535305Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164332" Dec 13 01:55:14.253289 containerd[2044]: time="2024-12-13T01:55:14.253229013Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:14.261404 containerd[2044]: time="2024-12-13T01:55:14.261326577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:14.263329 containerd[2044]: time="2024-12-13T01:55:14.262866669Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.649607488s" Dec 13 01:55:14.263329 containerd[2044]: time="2024-12-13T01:55:14.262925745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 01:55:14.299705 containerd[2044]: time="2024-12-13T01:55:14.299657050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:55:15.607938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763899808.mount: Deactivated successfully. 
Dec 13 01:55:16.071981 containerd[2044]: time="2024-12-13T01:55:16.071770582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:16.073684 containerd[2044]: time="2024-12-13T01:55:16.073411426Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Dec 13 01:55:16.074832 containerd[2044]: time="2024-12-13T01:55:16.074744482Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:16.078328 containerd[2044]: time="2024-12-13T01:55:16.078247894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:16.080396 containerd[2044]: time="2024-12-13T01:55:16.080019334Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.780109072s" Dec 13 01:55:16.080396 containerd[2044]: time="2024-12-13T01:55:16.080076490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 01:55:16.117256 containerd[2044]: time="2024-12-13T01:55:16.117202019Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:55:16.742628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833991521.mount: Deactivated successfully. 
Dec 13 01:55:17.881428 containerd[2044]: time="2024-12-13T01:55:17.879213231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:17.886454 containerd[2044]: time="2024-12-13T01:55:17.885711087Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:55:17.890231 containerd[2044]: time="2024-12-13T01:55:17.888451563Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:17.901718 containerd[2044]: time="2024-12-13T01:55:17.901620759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:17.905391 containerd[2044]: time="2024-12-13T01:55:17.905312499Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.788045608s" Dec 13 01:55:17.905391 containerd[2044]: time="2024-12-13T01:55:17.905380659Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:55:17.944547 containerd[2044]: time="2024-12-13T01:55:17.944493652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:55:18.471241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634476330.mount: Deactivated successfully. 
Dec 13 01:55:18.480338 containerd[2044]: time="2024-12-13T01:55:18.480017402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.481766 containerd[2044]: time="2024-12-13T01:55:18.481701338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:55:18.483556 containerd[2044]: time="2024-12-13T01:55:18.483385310Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.489691 containerd[2044]: time="2024-12-13T01:55:18.488902550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.491262 containerd[2044]: time="2024-12-13T01:55:18.490702226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 546.148634ms" Dec 13 01:55:18.491262 containerd[2044]: time="2024-12-13T01:55:18.490767638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:55:18.530603 containerd[2044]: time="2024-12-13T01:55:18.530297127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:55:19.132782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454831531.mount: Deactivated successfully. Dec 13 01:55:22.625715 containerd[2044]: time="2024-12-13T01:55:22.625239211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.667691 containerd[2044]: time="2024-12-13T01:55:22.667605031Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Dec 13 01:55:22.692875 containerd[2044]: time="2024-12-13T01:55:22.692803567Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.742783 containerd[2044]: time="2024-12-13T01:55:22.742686115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.746337 containerd[2044]: time="2024-12-13T01:55:22.746220379Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.215864224s" Dec 13 01:55:22.746821 containerd[2044]: time="2024-12-13T01:55:22.746298655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 01:55:24.106378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 13 01:55:24.116822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:24.445995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:24.458299 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:24.539660 kubelet[2779]: E1213 01:55:24.538728 2779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:24.543578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:24.543960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:29.878511 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:55:30.021850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:30.030175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:30.080787 systemd[1]: Reloading requested from client PID 2796 ('systemctl') (unit session-7.scope)... Dec 13 01:55:30.080822 systemd[1]: Reloading... Dec 13 01:55:30.322777 zram_generator::config[2842]: No configuration found. Dec 13 01:55:30.560258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:30.732960 systemd[1]: Reloading finished in 651 ms. Dec 13 01:55:30.839346 (kubelet)[2891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:30.840558 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:30.842592 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:55:30.844837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:30.855537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:31.131762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:31.150238 (kubelet)[2902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:31.232019 kubelet[2902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:31.232019 kubelet[2902]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:31.232019 kubelet[2902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:55:31.232580 kubelet[2902]: I1213 01:55:31.232132 2902 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:32.926210 kubelet[2902]: I1213 01:55:32.926153 2902 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:55:32.926210 kubelet[2902]: I1213 01:55:32.926202 2902 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:32.926977 kubelet[2902]: I1213 01:55:32.926530 2902 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:55:32.959453 kubelet[2902]: I1213 01:55:32.959235 2902 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:32.960134 kubelet[2902]: E1213 01:55:32.959835 2902 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:32.976287 kubelet[2902]: I1213 01:55:32.976239 2902 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:55:32.979333 kubelet[2902]: I1213 01:55:32.979249 2902 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:32.979628 kubelet[2902]: I1213 01:55:32.979325 2902 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:55:32.979857 kubelet[2902]: I1213 01:55:32.979711 2902 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:55:32.979857 kubelet[2902]: I1213 01:55:32.979735 2902 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:55:32.979982 kubelet[2902]: I1213 01:55:32.979967 2902 state_mem.go:36] "Initialized new in-memory state 
store" Dec 13 01:55:32.981588 kubelet[2902]: I1213 01:55:32.981538 2902 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:55:32.981588 kubelet[2902]: I1213 01:55:32.981586 2902 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:32.981588 kubelet[2902]: I1213 01:55:32.981727 2902 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:55:32.981588 kubelet[2902]: I1213 01:55:32.981758 2902 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:32.983900 kubelet[2902]: I1213 01:55:32.983864 2902 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:32.984429 kubelet[2902]: I1213 01:55:32.984390 2902 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:32.984733 kubelet[2902]: W1213 01:55:32.984710 2902 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:55:32.985992 kubelet[2902]: I1213 01:55:32.985962 2902 server.go:1264] "Started kubelet" Dec 13 01:55:32.986391 kubelet[2902]: W1213 01:55:32.986331 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:32.986551 kubelet[2902]: E1213 01:55:32.986527 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:32.986894 kubelet[2902]: W1213 01:55:32.986833 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-88&limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:32.987060 kubelet[2902]: E1213 01:55:32.987038 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-88&limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:32.995823 kubelet[2902]: I1213 01:55:32.994761 2902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:33.003746 kubelet[2902]: I1213 01:55:33.003397 2902 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:33.006025 kubelet[2902]: I1213 01:55:33.005970 2902 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:55:33.006197 kubelet[2902]: I1213 01:55:33.005999 2902 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:55:33.008169 kubelet[2902]: I1213 01:55:33.008132 2902 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:55:33.009357 kubelet[2902]: I1213 01:55:33.008596 2902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:33.010183 kubelet[2902]: I1213 01:55:33.010131 2902 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:55:33.012048 kubelet[2902]: E1213 01:55:33.010533 2902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.31.19.88:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-88.181099c62da3d96e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-88,UID:ip-172-31-19-88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-88,},FirstTimestamp:2024-12-13 01:55:32.985928046 +0000 UTC m=+1.826811478,LastTimestamp:2024-12-13 01:55:32.985928046 +0000 UTC m=+1.826811478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-88,}" Dec 13 01:55:33.012820 kubelet[2902]: I1213 01:55:33.012303 2902 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:33.012820 kubelet[2902]: I1213 01:55:33.012460 2902 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:33.013577 kubelet[2902]: W1213 01:55:33.013301 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:33.014679 kubelet[2902]: E1213 01:55:33.014587 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:33.014679 kubelet[2902]: I1213 01:55:33.014051 2902 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:33.014893 kubelet[2902]: E1213 01:55:33.013858 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-88?timeout=10s\": dial tcp 172.31.19.88:6443: connect: connection refused" interval="200ms" Dec 13 01:55:33.017106 kubelet[2902]: E1213 01:55:33.016756 2902 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:33.018694 kubelet[2902]: I1213 01:55:33.018178 2902 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:55:33.059302 kubelet[2902]: I1213 01:55:33.059255 2902 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:55:33.059302 kubelet[2902]: I1213 01:55:33.059288 2902 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:55:33.059706 kubelet[2902]: I1213 01:55:33.059322 2902 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:33.075561 kubelet[2902]: I1213 01:55:33.075314 2902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:55:33.078223 kubelet[2902]: I1213 01:55:33.078181 2902 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:55:33.078810 kubelet[2902]: I1213 01:55:33.078441 2902 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:55:33.078810 kubelet[2902]: I1213 01:55:33.078478 2902 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:55:33.078810 kubelet[2902]: E1213 01:55:33.078549 2902 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:55:33.080928 kubelet[2902]: I1213 01:55:33.080878 2902 policy_none.go:49] "None policy: Start" Dec 13 01:55:33.083330 kubelet[2902]: W1213 01:55:33.083254 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:33.083330 kubelet[2902]: E1213 01:55:33.083327 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:33.086260 kubelet[2902]: I1213 01:55:33.085498 2902 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:55:33.086260 kubelet[2902]: I1213 01:55:33.085543 2902 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:55:33.101199 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:55:33.109697 kubelet[2902]: I1213 01:55:33.109621 2902 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-88" Dec 13 01:55:33.110188 kubelet[2902]: E1213 01:55:33.110124 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.88:6443/api/v1/nodes\": dial tcp 172.31.19.88:6443: connect: connection refused" node="ip-172-31-19-88" Dec 13 01:55:33.120631 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:55:33.135785 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:55:33.151235 kubelet[2902]: I1213 01:55:33.151170 2902 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:55:33.152218 kubelet[2902]: I1213 01:55:33.151777 2902 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:55:33.152218 kubelet[2902]: I1213 01:55:33.152246 2902 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:55:33.156034 kubelet[2902]: E1213 01:55:33.155963 2902 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-88\" not found" Dec 13 01:55:33.179699 kubelet[2902]: I1213 01:55:33.179404 2902 topology_manager.go:215] "Topology Admit Handler" podUID="4bcba082c153926560e0eb4a95995e1c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-88" Dec 13 01:55:33.182444 kubelet[2902]: I1213 01:55:33.182072 2902 topology_manager.go:215] "Topology Admit Handler" podUID="968105a50f2560fbd4cff944805892de" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:33.185260 kubelet[2902]: I1213 01:55:33.185190 2902 topology_manager.go:215] "Topology Admit Handler" podUID="6678f48915fec6f8c692664951ccd1ca" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-88" Dec 13 01:55:33.200628 systemd[1]: Created slice kubepods-burstable-pod968105a50f2560fbd4cff944805892de.slice - libcontainer container kubepods-burstable-pod968105a50f2560fbd4cff944805892de.slice. Dec 13 01:55:33.211461 kubelet[2902]: I1213 01:55:33.210939 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:33.211461 kubelet[2902]: I1213 01:55:33.211003 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:33.211461 kubelet[2902]: I1213 01:55:33.211058 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bcba082c153926560e0eb4a95995e1c-ca-certs\") pod \"kube-apiserver-ip-172-31-19-88\" (UID: \"4bcba082c153926560e0eb4a95995e1c\") " pod="kube-system/kube-apiserver-ip-172-31-19-88" Dec 13 01:55:33.211461 kubelet[2902]: I1213 01:55:33.211092 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bcba082c153926560e0eb4a95995e1c-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-88\" (UID: \"4bcba082c153926560e0eb4a95995e1c\") " pod="kube-system/kube-apiserver-ip-172-31-19-88" Dec 13 01:55:33.211461 kubelet[2902]: I1213 01:55:33.211132 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bcba082c153926560e0eb4a95995e1c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-88\" (UID: 
\"4bcba082c153926560e0eb4a95995e1c\") " pod="kube-system/kube-apiserver-ip-172-31-19-88" Dec 13 01:55:33.213070 kubelet[2902]: I1213 01:55:33.211168 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:33.213070 kubelet[2902]: I1213 01:55:33.211201 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:33.213070 kubelet[2902]: I1213 01:55:33.211238 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:33.213070 kubelet[2902]: I1213 01:55:33.211278 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6678f48915fec6f8c692664951ccd1ca-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-88\" (UID: \"6678f48915fec6f8c692664951ccd1ca\") " pod="kube-system/kube-scheduler-ip-172-31-19-88" Dec 13 01:55:33.216563 kubelet[2902]: E1213 01:55:33.216499 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-88?timeout=10s\": dial tcp 172.31.19.88:6443: connect: connection refused" interval="400ms" Dec 13 01:55:33.219850 systemd[1]: Created slice kubepods-burstable-pod4bcba082c153926560e0eb4a95995e1c.slice - libcontainer container kubepods-burstable-pod4bcba082c153926560e0eb4a95995e1c.slice. Dec 13 01:55:33.235910 systemd[1]: Created slice kubepods-burstable-pod6678f48915fec6f8c692664951ccd1ca.slice - libcontainer container kubepods-burstable-pod6678f48915fec6f8c692664951ccd1ca.slice. 
Dec 13 01:55:33.312773 kubelet[2902]: I1213 01:55:33.312716 2902 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-88" Dec 13 01:55:33.313193 kubelet[2902]: E1213 01:55:33.313145 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.88:6443/api/v1/nodes\": dial tcp 172.31.19.88:6443: connect: connection refused" node="ip-172-31-19-88" Dec 13 01:55:33.514781 containerd[2044]: time="2024-12-13T01:55:33.514602521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-88,Uid:968105a50f2560fbd4cff944805892de,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:33.532702 containerd[2044]: time="2024-12-13T01:55:33.532451693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-88,Uid:4bcba082c153926560e0eb4a95995e1c,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:33.541900 containerd[2044]: time="2024-12-13T01:55:33.541776593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-88,Uid:6678f48915fec6f8c692664951ccd1ca,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:33.617417 kubelet[2902]: E1213 01:55:33.617332 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-88?timeout=10s\": dial tcp 172.31.19.88:6443: connect: connection refused" interval="800ms" Dec 13 01:55:33.716023 kubelet[2902]: I1213 01:55:33.715964 2902 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-88" Dec 13 01:55:33.716488 kubelet[2902]: E1213 01:55:33.716441 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.88:6443/api/v1/nodes\": dial tcp 172.31.19.88:6443: connect: connection refused" node="ip-172-31-19-88" Dec 13 01:55:33.883235 kubelet[2902]: W1213 01:55:33.883003 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-88&limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:33.883235 kubelet[2902]: E1213 01:55:33.883103 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-88&limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:34.074959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709536641.mount: Deactivated successfully. 
Dec 13 01:55:34.095170 containerd[2044]: time="2024-12-13T01:55:34.094796464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:34.101934 containerd[2044]: time="2024-12-13T01:55:34.101852428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:55:34.104353 containerd[2044]: time="2024-12-13T01:55:34.104247856Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:34.106485 containerd[2044]: time="2024-12-13T01:55:34.106393360Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:34.109817 containerd[2044]: time="2024-12-13T01:55:34.109688668Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:34.111612 containerd[2044]: time="2024-12-13T01:55:34.111452920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:55:34.114279 containerd[2044]: time="2024-12-13T01:55:34.114081508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:55:34.116464 containerd[2044]: time="2024-12-13T01:55:34.116362456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:34.117745 kubelet[2902]: W1213 01:55:34.117670 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:34.119012 kubelet[2902]: E1213 01:55:34.118197 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:34.121880 containerd[2044]: time="2024-12-13T01:55:34.121544452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 606.802875ms" Dec 13 01:55:34.126126 containerd[2044]: time="2024-12-13T01:55:34.126058636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 593.453871ms" Dec 13 01:55:34.133376 containerd[2044]: time="2024-12-13T01:55:34.133160812Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.275871ms" Dec 13 01:55:34.230818 kubelet[2902]: W1213 01:55:34.230182 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:34.230818 kubelet[2902]: E1213 01:55:34.230273 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:34.295370 kubelet[2902]: W1213 01:55:34.295014 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:34.295370 kubelet[2902]: E1213 01:55:34.295145 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:34.340409 containerd[2044]: time="2024-12-13T01:55:34.339950657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:34.340409 containerd[2044]: time="2024-12-13T01:55:34.340061321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:34.340409 containerd[2044]: time="2024-12-13T01:55:34.340114937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:34.340409 containerd[2044]: time="2024-12-13T01:55:34.340284725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:34.347625 containerd[2044]: time="2024-12-13T01:55:34.346750193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:34.347986 containerd[2044]: time="2024-12-13T01:55:34.347698421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:34.347986 containerd[2044]: time="2024-12-13T01:55:34.347753189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:34.349030 containerd[2044]: time="2024-12-13T01:55:34.348935645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:34.351446 containerd[2044]: time="2024-12-13T01:55:34.351028517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:34.351446 containerd[2044]: time="2024-12-13T01:55:34.351141605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:34.351446 containerd[2044]: time="2024-12-13T01:55:34.351195029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:34.351446 containerd[2044]: time="2024-12-13T01:55:34.351357233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:34.406081 systemd[1]: Started cri-containerd-f07d863cb07332b78fe6d29b243df06de8f68f591230a1b8697474eb14e1b987.scope - libcontainer container f07d863cb07332b78fe6d29b243df06de8f68f591230a1b8697474eb14e1b987. Dec 13 01:55:34.418347 kubelet[2902]: E1213 01:55:34.418271 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-88?timeout=10s\": dial tcp 172.31.19.88:6443: connect: connection refused" interval="1.6s" Dec 13 01:55:34.422306 systemd[1]: Started cri-containerd-f44dc000d77c83321bd98da5d76d81b00175008e3645aeb99b88b791b692775a.scope - libcontainer container f44dc000d77c83321bd98da5d76d81b00175008e3645aeb99b88b791b692775a. Dec 13 01:55:34.438210 systemd[1]: Started cri-containerd-785d922d4abf0bd505c882f68357c6dbcfa7eafb45c69d01686a42870a99ef67.scope - libcontainer container 785d922d4abf0bd505c882f68357c6dbcfa7eafb45c69d01686a42870a99ef67. Dec 13 01:55:34.524098 kubelet[2902]: I1213 01:55:34.523598 2902 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-88" Dec 13 01:55:34.525597 kubelet[2902]: E1213 01:55:34.525470 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.88:6443/api/v1/nodes\": dial tcp 172.31.19.88:6443: connect: connection refused" node="ip-172-31-19-88" Dec 13 01:55:34.557282 containerd[2044]: time="2024-12-13T01:55:34.557199234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-88,Uid:6678f48915fec6f8c692664951ccd1ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"f07d863cb07332b78fe6d29b243df06de8f68f591230a1b8697474eb14e1b987\"" Dec 13 01:55:34.570129 containerd[2044]: time="2024-12-13T01:55:34.568964046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-88,Uid:968105a50f2560fbd4cff944805892de,Namespace:kube-system,Attempt:0,} returns sandbox id \"f44dc000d77c83321bd98da5d76d81b00175008e3645aeb99b88b791b692775a\"" Dec 13 01:55:34.570129 containerd[2044]: time="2024-12-13T01:55:34.569510106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-88,Uid:4bcba082c153926560e0eb4a95995e1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"785d922d4abf0bd505c882f68357c6dbcfa7eafb45c69d01686a42870a99ef67\"" Dec 13 01:55:34.571267 containerd[2044]: time="2024-12-13T01:55:34.571203582Z" level=info msg="CreateContainer within sandbox \"f07d863cb07332b78fe6d29b243df06de8f68f591230a1b8697474eb14e1b987\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:55:34.579216 containerd[2044]: time="2024-12-13T01:55:34.579153990Z" level=info msg="CreateContainer within sandbox \"f44dc000d77c83321bd98da5d76d81b00175008e3645aeb99b88b791b692775a\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:55:34.580765 containerd[2044]: time="2024-12-13T01:55:34.580697622Z" level=info msg="CreateContainer within sandbox \"785d922d4abf0bd505c882f68357c6dbcfa7eafb45c69d01686a42870a99ef67\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:55:34.630360 containerd[2044]: time="2024-12-13T01:55:34.630302659Z" level=info msg="CreateContainer within sandbox \"f07d863cb07332b78fe6d29b243df06de8f68f591230a1b8697474eb14e1b987\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61\"" Dec 13 01:55:34.631775 containerd[2044]: time="2024-12-13T01:55:34.631608931Z" level=info msg="StartContainer for \"6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61\"" Dec 13 01:55:34.639585 containerd[2044]: time="2024-12-13T01:55:34.639470935Z" level=info msg="CreateContainer within sandbox \"785d922d4abf0bd505c882f68357c6dbcfa7eafb45c69d01686a42870a99ef67\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78baafc4921f2e934d7cbbf82ecb08e8057c12bb39d295513859461be977ae8b\"" Dec 13 01:55:34.642214 containerd[2044]: time="2024-12-13T01:55:34.640719595Z" level=info msg="StartContainer for \"78baafc4921f2e934d7cbbf82ecb08e8057c12bb39d295513859461be977ae8b\"" Dec 13 01:55:34.642214 containerd[2044]: time="2024-12-13T01:55:34.642050035Z" level=info msg="CreateContainer within sandbox \"f44dc000d77c83321bd98da5d76d81b00175008e3645aeb99b88b791b692775a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc\"" Dec 13 01:55:34.643425 containerd[2044]: time="2024-12-13T01:55:34.643362499Z" level=info msg="StartContainer for \"77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc\"" Dec 13 01:55:34.701080 systemd[1]: Started cri-containerd-6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61.scope - libcontainer container 6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61. Dec 13 01:55:34.720968 systemd[1]: Started cri-containerd-78baafc4921f2e934d7cbbf82ecb08e8057c12bb39d295513859461be977ae8b.scope - libcontainer container 78baafc4921f2e934d7cbbf82ecb08e8057c12bb39d295513859461be977ae8b. Dec 13 01:55:34.740972 systemd[1]: Started cri-containerd-77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc.scope - libcontainer container 77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc. 
Dec 13 01:55:34.832157 containerd[2044]: time="2024-12-13T01:55:34.832100528Z" level=info msg="StartContainer for \"78baafc4921f2e934d7cbbf82ecb08e8057c12bb39d295513859461be977ae8b\" returns successfully" Dec 13 01:55:34.883253 containerd[2044]: time="2024-12-13T01:55:34.883187804Z" level=info msg="StartContainer for \"6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61\" returns successfully" Dec 13 01:55:34.883652 containerd[2044]: time="2024-12-13T01:55:34.883595888Z" level=info msg="StartContainer for \"77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc\" returns successfully" Dec 13 01:55:34.971300 kubelet[2902]: E1213 01:55:34.971170 2902 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.88:6443: connect: connection refused Dec 13 01:55:36.128351 kubelet[2902]: I1213 01:55:36.128285 2902 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-88" Dec 13 01:55:38.986472 kubelet[2902]: I1213 01:55:38.986402 2902 apiserver.go:52] "Watching apiserver" Dec 13 01:55:39.043568 kubelet[2902]: E1213 01:55:39.043495 2902 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-88\" not found" node="ip-172-31-19-88" Dec 13 01:55:39.109121 kubelet[2902]: I1213 01:55:39.109065 2902 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:55:39.142174 kubelet[2902]: I1213 01:55:39.142117 2902 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-88" Dec 13 01:55:39.207121 kubelet[2902]: E1213 01:55:39.206958 2902 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-88.181099c62da3d96e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-88,UID:ip-172-31-19-88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-88,},FirstTimestamp:2024-12-13 01:55:32.985928046 +0000 UTC m=+1.826811478,LastTimestamp:2024-12-13 01:55:32.985928046 +0000 UTC m=+1.826811478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-88,}" Dec 13 01:55:41.613539 systemd[1]: Reloading requested from client PID 3181 ('systemctl') (unit session-7.scope)... Dec 13 01:55:41.613576 systemd[1]: Reloading... Dec 13 01:55:41.832701 zram_generator::config[3224]: No configuration found. Dec 13 01:55:42.086409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:42.287382 systemd[1]: Reloading finished in 673 ms. Dec 13 01:55:42.359375 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:42.360426 kubelet[2902]: I1213 01:55:42.359607 2902 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:42.372414 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 13 01:55:42.373027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:42.373245 systemd[1]: kubelet.service: Consumed 2.588s CPU time, 113.9M memory peak, 0B memory swap peak. Dec 13 01:55:42.382275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:42.703818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:42.720247 (kubelet)[3281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:42.854522 kubelet[3281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:42.854522 kubelet[3281]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:42.854522 kubelet[3281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:42.855083 kubelet[3281]: I1213 01:55:42.854596 3281 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:42.865728 kubelet[3281]: I1213 01:55:42.865606 3281 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:55:42.865728 kubelet[3281]: I1213 01:55:42.865677 3281 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:42.866071 kubelet[3281]: I1213 01:55:42.866039 3281 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:55:42.868774 kubelet[3281]: I1213 01:55:42.868723 3281 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:55:42.871541 kubelet[3281]: I1213 01:55:42.871325 3281 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:42.882995 kubelet[3281]: I1213 01:55:42.882946 3281 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:55:42.883564 kubelet[3281]: I1213 01:55:42.883513 3281 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:42.883935 kubelet[3281]: I1213 01:55:42.883566 3281 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:55:42.884118 kubelet[3281]: I1213 01:55:42.883935 3281 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:55:42.884118 kubelet[3281]: I1213 01:55:42.883958 3281 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:55:42.884118 kubelet[3281]: I1213 01:55:42.884045 3281 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:42.886563 kubelet[3281]: I1213 01:55:42.884227 3281 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:55:42.886563 kubelet[3281]: I1213 01:55:42.884250 3281 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:42.886563 kubelet[3281]: I1213 01:55:42.884299 3281 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:55:42.886563 kubelet[3281]: I1213 01:55:42.884334 3281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:42.888108 kubelet[3281]: I1213 01:55:42.887881 3281 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:42.892544 kubelet[3281]: I1213 01:55:42.890596 3281 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:42.895670 kubelet[3281]: I1213 01:55:42.893479 3281 server.go:1264] "Started kubelet" Dec 13 01:55:42.911587 kubelet[3281]: I1213 01:55:42.911545 3281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:42.926735 kubelet[3281]: I1213 01:55:42.926465 3281 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:42.939786 kubelet[3281]: I1213 01:55:42.926828 3281 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:42.939786 kubelet[3281]: I1213 01:55:42.939337 3281 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:42.939786 kubelet[3281]: I1213 01:55:42.935816 3281 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:55:42.939786 kubelet[3281]: I1213 01:55:42.939676 3281 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:55:42.943681 kubelet[3281]: I1213 01:55:42.935765 3281 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:55:42.943681 kubelet[3281]: I1213 01:55:42.941909 3281 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:55:42.971596 kubelet[3281]: I1213 01:55:42.971227 3281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:55:42.976286 kubelet[3281]: I1213 01:55:42.976013 3281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:55:42.977916 kubelet[3281]: I1213 01:55:42.976758 3281 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:55:42.977916 kubelet[3281]: I1213 01:55:42.976819 3281 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:55:42.977916 kubelet[3281]: E1213 01:55:42.976896 3281 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:55:42.986915 kubelet[3281]: I1213 01:55:42.986820 3281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:43.001951 kubelet[3281]: I1213 01:55:43.001883 3281 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:55:43.002082 kubelet[3281]: I1213 01:55:43.002043 3281 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:43.006444 kubelet[3281]: E1213 01:55:43.005506 3281 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:43.046023 kubelet[3281]: I1213 01:55:43.045953 3281 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-88" Dec 13 01:55:43.069890 kubelet[3281]: I1213 01:55:43.068933 3281 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-88" Dec 13 01:55:43.069890 kubelet[3281]: I1213 01:55:43.069121 3281 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-88" Dec 13 01:55:43.077724 kubelet[3281]: E1213 01:55:43.076953 3281 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:55:43.139899 kubelet[3281]: I1213 01:55:43.139616 3281 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:55:43.139899 kubelet[3281]: I1213 01:55:43.139858 3281 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:55:43.140107 kubelet[3281]: I1213 01:55:43.139947 3281 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:43.140303 kubelet[3281]: I1213 01:55:43.140258 3281 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:55:43.140372 kubelet[3281]: I1213 01:55:43.140303 3281 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:55:43.140372 kubelet[3281]: I1213 01:55:43.140345 3281 policy_none.go:49] "None policy: Start" Dec 13 01:55:43.144134 kubelet[3281]: I1213 01:55:43.142805 3281 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:55:43.144134 kubelet[3281]: I1213 01:55:43.142850 3281 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:55:43.144134 kubelet[3281]: I1213 01:55:43.143129 3281 state_mem.go:75] "Updated machine memory state" Dec 13 01:55:43.154470 kubelet[3281]: I1213 01:55:43.154436 3281 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:55:43.155719 kubelet[3281]: I1213 01:55:43.155602 3281 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:55:43.156612 kubelet[3281]: I1213 01:55:43.156581 3281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:55:43.278791 kubelet[3281]: I1213 01:55:43.277455 3281 topology_manager.go:215] "Topology Admit Handler" podUID="4bcba082c153926560e0eb4a95995e1c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-88" Dec 13 01:55:43.278791 kubelet[3281]: I1213 01:55:43.278299 3281 topology_manager.go:215] "Topology Admit Handler" podUID="968105a50f2560fbd4cff944805892de" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:43.278791 kubelet[3281]: I1213 01:55:43.278418 3281 topology_manager.go:215] "Topology Admit Handler" podUID="6678f48915fec6f8c692664951ccd1ca" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-88" Dec 13 01:55:43.292258 kubelet[3281]: E1213 01:55:43.292159 3281 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-88\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-88" Dec 13 01:55:43.345413 kubelet[3281]: I1213 01:55:43.345354 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bcba082c153926560e0eb4a95995e1c-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-88\" (UID: \"4bcba082c153926560e0eb4a95995e1c\") " 
pod="kube-system/kube-apiserver-ip-172-31-19-88" Dec 13 01:55:43.345413 kubelet[3281]: I1213 01:55:43.345442 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:43.345413 kubelet[3281]: I1213 01:55:43.345493 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:43.345907 kubelet[3281]: I1213 01:55:43.345530 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:43.345907 kubelet[3281]: I1213 01:55:43.345571 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:43.345907 kubelet[3281]: I1213 01:55:43.345627 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6678f48915fec6f8c692664951ccd1ca-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-88\" (UID: \"6678f48915fec6f8c692664951ccd1ca\") " pod="kube-system/kube-scheduler-ip-172-31-19-88" Dec 13 01:55:43.345907 kubelet[3281]: I1213 01:55:43.345699 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bcba082c153926560e0eb4a95995e1c-ca-certs\") pod \"kube-apiserver-ip-172-31-19-88\" (UID: \"4bcba082c153926560e0eb4a95995e1c\") " pod="kube-system/kube-apiserver-ip-172-31-19-88" Dec 13 01:55:43.345907 kubelet[3281]: I1213 01:55:43.345750 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bcba082c153926560e0eb4a95995e1c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-88\" (UID: \"4bcba082c153926560e0eb4a95995e1c\") " pod="kube-system/kube-apiserver-ip-172-31-19-88" Dec 13 01:55:43.346221 kubelet[3281]: I1213 01:55:43.345798 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/968105a50f2560fbd4cff944805892de-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-88\" (UID: \"968105a50f2560fbd4cff944805892de\") " pod="kube-system/kube-controller-manager-ip-172-31-19-88" Dec 13 01:55:43.885679 update_engine[2010]: I20241213 01:55:43.883848 2010 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:55:43.886277 kubelet[3281]: I1213 01:55:43.886020 3281 apiserver.go:52] "Watching apiserver" Dec 13 01:55:43.940090 kubelet[3281]: I1213 01:55:43.939989 3281 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:55:44.039923 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3333) Dec 13 01:55:44.137286 kubelet[3281]: E1213 01:55:44.137118 3281 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-88\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-88" Dec 13 01:55:44.541402 kubelet[3281]: I1213 01:55:44.541281 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-88" podStartSLOduration=4.541257916 podStartE2EDuration="4.541257916s" podCreationTimestamp="2024-12-13 01:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:44.456128283 +0000 UTC m=+1.726183485" watchObservedRunningTime="2024-12-13 01:55:44.541257916 +0000 UTC m=+1.811313094" Dec 13 01:55:44.574079 kubelet[3281]: I1213 01:55:44.573484 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-88" podStartSLOduration=1.5734612719999999 podStartE2EDuration="1.573461272s" podCreationTimestamp="2024-12-13 01:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:44.542933104 +0000 UTC m=+1.812988282" watchObservedRunningTime="2024-12-13 01:55:44.573461272 +0000 UTC m=+1.843516450" Dec 13 01:55:46.201931 kubelet[3281]: I1213 01:55:46.201572 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-88" podStartSLOduration=3.201529588 podStartE2EDuration="3.201529588s" podCreationTimestamp="2024-12-13 01:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:44.574948288 +0000 UTC m=+1.845003466" watchObservedRunningTime="2024-12-13 01:55:46.201529588 +0000 UTC m=+3.471584754" Dec 13 01:55:48.467324 sudo[2345]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:48.492735 sshd[2342]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:48.499579 systemd[1]: sshd@6-172.31.19.88:22-139.178.68.195:44418.service: Deactivated successfully. Dec 13 01:55:48.503355 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:55:48.504101 systemd[1]: session-7.scope: Consumed 10.773s CPU time, 187.0M memory peak, 0B memory swap peak. Dec 13 01:55:48.507941 systemd-logind[2008]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:55:48.510598 systemd-logind[2008]: Removed session 7. Dec 13 01:55:54.844536 kubelet[3281]: I1213 01:55:54.844310 3281 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:55:54.845166 containerd[2044]: time="2024-12-13T01:55:54.844973631Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:55:54.848224 kubelet[3281]: I1213 01:55:54.847879 3281 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:55:55.502125 kubelet[3281]: I1213 01:55:55.501386 3281 topology_manager.go:215] "Topology Admit Handler" podUID="af5985fb-2011-45fe-99cb-57a20f5d9571" podNamespace="kube-system" podName="kube-proxy-5h4bx" Dec 13 01:55:55.511018 kubelet[3281]: W1213 01:55:55.510902 3281 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:55:55.511018 kubelet[3281]: E1213 01:55:55.510978 3281 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:55:55.522960 systemd[1]: Created slice kubepods-besteffort-podaf5985fb_2011_45fe_99cb_57a20f5d9571.slice - libcontainer container kubepods-besteffort-podaf5985fb_2011_45fe_99cb_57a20f5d9571.slice. Dec 13 01:55:55.625526 kubelet[3281]: I1213 01:55:55.624416 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5985fb-2011-45fe-99cb-57a20f5d9571-xtables-lock\") pod \"kube-proxy-5h4bx\" (UID: \"af5985fb-2011-45fe-99cb-57a20f5d9571\") " pod="kube-system/kube-proxy-5h4bx" Dec 13 01:55:55.625526 kubelet[3281]: I1213 01:55:55.624481 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af5985fb-2011-45fe-99cb-57a20f5d9571-kube-proxy\") pod \"kube-proxy-5h4bx\" (UID: \"af5985fb-2011-45fe-99cb-57a20f5d9571\") " pod="kube-system/kube-proxy-5h4bx" Dec 13 01:55:55.625526 kubelet[3281]: I1213 01:55:55.624529 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5985fb-2011-45fe-99cb-57a20f5d9571-lib-modules\") pod \"kube-proxy-5h4bx\" (UID: \"af5985fb-2011-45fe-99cb-57a20f5d9571\") " pod="kube-system/kube-proxy-5h4bx" Dec 13 01:55:55.625526 kubelet[3281]: I1213 01:55:55.624570 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29qt\" (UniqueName: \"kubernetes.io/projected/af5985fb-2011-45fe-99cb-57a20f5d9571-kube-api-access-z29qt\") pod \"kube-proxy-5h4bx\" (UID: \"af5985fb-2011-45fe-99cb-57a20f5d9571\") " pod="kube-system/kube-proxy-5h4bx" Dec 13 01:55:55.749670 kubelet[3281]: I1213 01:55:55.748268 3281 topology_manager.go:215] "Topology Admit Handler" podUID="d12a2abb-9d82-465d-8d90-064f6397b214" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-hbtzl" Dec 13 01:55:55.761000 kubelet[3281]: W1213 01:55:55.757794 3281 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:55:55.761000 
kubelet[3281]: E1213 01:55:55.757872 3281 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:55:55.761977 kubelet[3281]: W1213 01:55:55.761918 3281 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:55:55.762496 kubelet[3281]: E1213 01:55:55.762464 3281 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:55:55.770001 systemd[1]: Created slice kubepods-besteffort-podd12a2abb_9d82_465d_8d90_064f6397b214.slice - libcontainer container kubepods-besteffort-podd12a2abb_9d82_465d_8d90_064f6397b214.slice. Dec 13 01:55:55.926345 kubelet[3281]: I1213 01:55:55.926263 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d12a2abb-9d82-465d-8d90-064f6397b214-var-lib-calico\") pod \"tigera-operator-7bc55997bb-hbtzl\" (UID: \"d12a2abb-9d82-465d-8d90-064f6397b214\") " pod="tigera-operator/tigera-operator-7bc55997bb-hbtzl" Dec 13 01:55:55.926927 kubelet[3281]: I1213 01:55:55.926368 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tzq\" (UniqueName: \"kubernetes.io/projected/d12a2abb-9d82-465d-8d90-064f6397b214-kube-api-access-s4tzq\") pod \"tigera-operator-7bc55997bb-hbtzl\" (UID: \"d12a2abb-9d82-465d-8d90-064f6397b214\") " pod="tigera-operator/tigera-operator-7bc55997bb-hbtzl" Dec 13 01:55:56.725928 kubelet[3281]: E1213 01:55:56.725866 3281 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:55:56.726109 kubelet[3281]: E1213 01:55:56.726011 3281 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af5985fb-2011-45fe-99cb-57a20f5d9571-kube-proxy podName:af5985fb-2011-45fe-99cb-57a20f5d9571 nodeName:}" failed. No retries permitted until 2024-12-13 01:55:57.225975512 +0000 UTC m=+14.496030690 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/af5985fb-2011-45fe-99cb-57a20f5d9571-kube-proxy") pod "kube-proxy-5h4bx" (UID: "af5985fb-2011-45fe-99cb-57a20f5d9571") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:55:57.291466 containerd[2044]: time="2024-12-13T01:55:57.291406647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-hbtzl,Uid:d12a2abb-9d82-465d-8d90-064f6397b214,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:55:57.340239 containerd[2044]: time="2024-12-13T01:55:57.340125039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5h4bx,Uid:af5985fb-2011-45fe-99cb-57a20f5d9571,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:57.353049 containerd[2044]: time="2024-12-13T01:55:57.352276671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:57.354220 containerd[2044]: time="2024-12-13T01:55:57.354128367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:57.354405 containerd[2044]: time="2024-12-13T01:55:57.354281835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:57.355617 containerd[2044]: time="2024-12-13T01:55:57.354998055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:57.411134 systemd[1]: Started cri-containerd-04248e3e57963f0c7de9534ce093816bbcbfb55dbb1fba87398d5431f50c4648.scope - libcontainer container 04248e3e57963f0c7de9534ce093816bbcbfb55dbb1fba87398d5431f50c4648. Dec 13 01:55:57.426963 containerd[2044]: time="2024-12-13T01:55:57.425852764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:57.426963 containerd[2044]: time="2024-12-13T01:55:57.425961916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:57.426963 containerd[2044]: time="2024-12-13T01:55:57.425998792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:57.426963 containerd[2044]: time="2024-12-13T01:55:57.426144460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:57.466796 systemd[1]: Started cri-containerd-6f726401898744e24c2bfab1ac52a0b0a7af1a4d405de648e249a46baada3844.scope - libcontainer container 6f726401898744e24c2bfab1ac52a0b0a7af1a4d405de648e249a46baada3844. 
Dec 13 01:55:57.494265 containerd[2044]: time="2024-12-13T01:55:57.494198584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-hbtzl,Uid:d12a2abb-9d82-465d-8d90-064f6397b214,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"04248e3e57963f0c7de9534ce093816bbcbfb55dbb1fba87398d5431f50c4648\"" Dec 13 01:55:57.501311 containerd[2044]: time="2024-12-13T01:55:57.501253120Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:55:57.529459 containerd[2044]: time="2024-12-13T01:55:57.529388260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5h4bx,Uid:af5985fb-2011-45fe-99cb-57a20f5d9571,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f726401898744e24c2bfab1ac52a0b0a7af1a4d405de648e249a46baada3844\"" Dec 13 01:55:57.537269 containerd[2044]: time="2024-12-13T01:55:57.536997124Z" level=info msg="CreateContainer within sandbox \"6f726401898744e24c2bfab1ac52a0b0a7af1a4d405de648e249a46baada3844\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:55:57.568581 containerd[2044]: time="2024-12-13T01:55:57.568334176Z" level=info msg="CreateContainer within sandbox \"6f726401898744e24c2bfab1ac52a0b0a7af1a4d405de648e249a46baada3844\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0995657642006590faf7a798b82022e3b952ecf8881d4024249ab4d51329a35a\"" Dec 13 01:55:57.572946 containerd[2044]: time="2024-12-13T01:55:57.571869376Z" level=info msg="StartContainer for \"0995657642006590faf7a798b82022e3b952ecf8881d4024249ab4d51329a35a\"" Dec 13 01:55:57.620965 systemd[1]: Started cri-containerd-0995657642006590faf7a798b82022e3b952ecf8881d4024249ab4d51329a35a.scope - libcontainer container 0995657642006590faf7a798b82022e3b952ecf8881d4024249ab4d51329a35a. Dec 13 01:55:57.683761 containerd[2044]: time="2024-12-13T01:55:57.683421857Z" level=info msg="StartContainer for \"0995657642006590faf7a798b82022e3b952ecf8881d4024249ab4d51329a35a\" returns successfully" Dec 13 01:55:59.359093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount526108298.mount: Deactivated successfully. 
Dec 13 01:56:00.011692 containerd[2044]: time="2024-12-13T01:56:00.011100581Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:00.013240 containerd[2044]: time="2024-12-13T01:56:00.012871013Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125972" Dec 13 01:56:00.015323 containerd[2044]: time="2024-12-13T01:56:00.015230813Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:00.020535 containerd[2044]: time="2024-12-13T01:56:00.020435813Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:00.023193 containerd[2044]: time="2024-12-13T01:56:00.022371365Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.521057153s" Dec 13 01:56:00.023193 containerd[2044]: time="2024-12-13T01:56:00.022432637Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:56:00.027912 containerd[2044]: time="2024-12-13T01:56:00.027802265Z" level=info msg="CreateContainer within sandbox \"04248e3e57963f0c7de9534ce093816bbcbfb55dbb1fba87398d5431f50c4648\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:56:00.057087 containerd[2044]: time="2024-12-13T01:56:00.056893061Z" level=info msg="CreateContainer within sandbox \"04248e3e57963f0c7de9534ce093816bbcbfb55dbb1fba87398d5431f50c4648\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f\"" Dec 13 01:56:00.058733 containerd[2044]: time="2024-12-13T01:56:00.057731501Z" level=info msg="StartContainer for \"117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f\"" Dec 13 01:56:00.108967 systemd[1]: Started cri-containerd-117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f.scope - libcontainer container 117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f. 
Dec 13 01:56:00.158567 containerd[2044]: time="2024-12-13T01:56:00.158433809Z" level=info msg="StartContainer for \"117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f\" returns successfully" Dec 13 01:56:01.150755 kubelet[3281]: I1213 01:56:01.150683 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5h4bx" podStartSLOduration=6.150660138 podStartE2EDuration="6.150660138s" podCreationTimestamp="2024-12-13 01:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:58.129290643 +0000 UTC m=+15.399346373" watchObservedRunningTime="2024-12-13 01:56:01.150660138 +0000 UTC m=+18.420715352" Dec 13 01:56:05.035526 kubelet[3281]: I1213 01:56:05.035390 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-hbtzl" podStartSLOduration=7.508833789 podStartE2EDuration="10.035368834s" podCreationTimestamp="2024-12-13 01:55:55 +0000 UTC" firstStartedPulling="2024-12-13 01:55:57.498143152 +0000 UTC m=+14.768198318" lastFinishedPulling="2024-12-13 01:56:00.024678197 +0000 UTC m=+17.294733363" observedRunningTime="2024-12-13 01:56:01.150432486 +0000 UTC m=+18.420487664" watchObservedRunningTime="2024-12-13 01:56:05.035368834 +0000 UTC m=+22.305424024" Dec 13 01:56:05.036303 kubelet[3281]: I1213 01:56:05.035632 3281 topology_manager.go:215] "Topology Admit Handler" podUID="a3d1eb13-2428-4586-93d9-9fa7d23cd9e2" podNamespace="calico-system" podName="calico-typha-5d4f89b5b9-mmc25" Dec 13 01:56:05.056210 systemd[1]: Created slice kubepods-besteffort-poda3d1eb13_2428_4586_93d9_9fa7d23cd9e2.slice - libcontainer container kubepods-besteffort-poda3d1eb13_2428_4586_93d9_9fa7d23cd9e2.slice. Dec 13 01:56:05.191543 kubelet[3281]: I1213 01:56:05.191462 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-typha-certs\") pod \"calico-typha-5d4f89b5b9-mmc25\" (UID: \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\") " pod="calico-system/calico-typha-5d4f89b5b9-mmc25" Dec 13 01:56:05.191730 kubelet[3281]: I1213 01:56:05.191549 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m849t\" (UniqueName: \"kubernetes.io/projected/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-kube-api-access-m849t\") pod \"calico-typha-5d4f89b5b9-mmc25\" (UID: \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\") " pod="calico-system/calico-typha-5d4f89b5b9-mmc25" Dec 13 01:56:05.191730 kubelet[3281]: I1213 01:56:05.191600 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-tigera-ca-bundle\") pod \"calico-typha-5d4f89b5b9-mmc25\" (UID: \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\") " pod="calico-system/calico-typha-5d4f89b5b9-mmc25" Dec 13 01:56:05.236834 kubelet[3281]: I1213 01:56:05.235711 3281 topology_manager.go:215] "Topology Admit Handler" podUID="46af5ed3-04d4-4283-aa3b-cd658fdc701a" podNamespace="calico-system" podName="calico-node-jlcvd" Dec 13 01:56:05.255597 systemd[1]: Created slice kubepods-besteffort-pod46af5ed3_04d4_4283_aa3b_cd658fdc701a.slice - libcontainer container kubepods-besteffort-pod46af5ed3_04d4_4283_aa3b_cd658fdc701a.slice. 
Dec 13 01:56:05.366665 containerd[2044]: time="2024-12-13T01:56:05.365788523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d4f89b5b9-mmc25,Uid:a3d1eb13-2428-4586-93d9-9fa7d23cd9e2,Namespace:calico-system,Attempt:0,}" Dec 13 01:56:05.382026 kubelet[3281]: I1213 01:56:05.381301 3281 topology_manager.go:215] "Topology Admit Handler" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" podNamespace="calico-system" podName="csi-node-driver-v8ld8" Dec 13 01:56:05.383826 kubelet[3281]: E1213 01:56:05.381982 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:05.395835 kubelet[3281]: I1213 01:56:05.394834 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-flexvol-driver-host\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.396257 kubelet[3281]: I1213 01:56:05.396218 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-policysync\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.396417 kubelet[3281]: I1213 01:56:05.396392 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46af5ed3-04d4-4283-aa3b-cd658fdc701a-tigera-ca-bundle\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.396731 kubelet[3281]: I1213 01:56:05.396480 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-lib-calico\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.399936 kubelet[3281]: I1213 01:56:05.399370 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-lib-modules\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.399936 kubelet[3281]: I1213 01:56:05.399540 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-net-dir\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.399936 kubelet[3281]: I1213 01:56:05.399592 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-log-dir\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 
01:56:05.399936 kubelet[3281]: I1213 01:56:05.399698 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-xtables-lock\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.399936 kubelet[3281]: I1213 01:56:05.399761 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/46af5ed3-04d4-4283-aa3b-cd658fdc701a-node-certs\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.400306 kubelet[3281]: I1213 01:56:05.399806 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8wf\" (UniqueName: \"kubernetes.io/projected/46af5ed3-04d4-4283-aa3b-cd658fdc701a-kube-api-access-vd8wf\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.400306 kubelet[3281]: I1213 01:56:05.399870 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-run-calico\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.403767 kubelet[3281]: I1213 01:56:05.399908 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-bin-dir\") pod \"calico-node-jlcvd\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " pod="calico-system/calico-node-jlcvd" Dec 13 01:56:05.458397 containerd[2044]: time="2024-12-13T01:56:05.457880688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:05.458397 containerd[2044]: time="2024-12-13T01:56:05.458026884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:05.458397 containerd[2044]: time="2024-12-13T01:56:05.458095236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:05.459776 containerd[2044]: time="2024-12-13T01:56:05.458845452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:05.504712 kubelet[3281]: I1213 01:56:05.503581 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6-socket-dir\") pod \"csi-node-driver-v8ld8\" (UID: \"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6\") " pod="calico-system/csi-node-driver-v8ld8" Dec 13 01:56:05.504712 kubelet[3281]: I1213 01:56:05.503697 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbzw\" (UniqueName: \"kubernetes.io/projected/fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6-kube-api-access-qlbzw\") pod \"csi-node-driver-v8ld8\" (UID: \"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6\") " pod="calico-system/csi-node-driver-v8ld8" Dec 13 01:56:05.504712 kubelet[3281]: I1213 01:56:05.503799 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6-registration-dir\") pod \"csi-node-driver-v8ld8\" (UID: \"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6\") " pod="calico-system/csi-node-driver-v8ld8" Dec 13 01:56:05.504712 kubelet[3281]: I1213 01:56:05.503894 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6-varrun\") pod \"csi-node-driver-v8ld8\" (UID: \"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6\") " pod="calico-system/csi-node-driver-v8ld8" Dec 13 01:56:05.504712 kubelet[3281]: I1213 01:56:05.503983 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6-kubelet-dir\") pod \"csi-node-driver-v8ld8\" (UID: \"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6\") " pod="calico-system/csi-node-driver-v8ld8" Dec 13 01:56:05.515522 kubelet[3281]: E1213 01:56:05.514535 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.517684 kubelet[3281]: W1213 01:56:05.515691 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.517684 kubelet[3281]: E1213 01:56:05.515881 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.519806 kubelet[3281]: E1213 01:56:05.518228 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.519806 kubelet[3281]: W1213 01:56:05.518260 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.520137 kubelet[3281]: E1213 01:56:05.519997 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.520590 kubelet[3281]: E1213 01:56:05.520530 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.521766 kubelet[3281]: W1213 01:56:05.520573 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.521766 kubelet[3281]: E1213 01:56:05.520759 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.522404 kubelet[3281]: E1213 01:56:05.522172 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.522561 kubelet[3281]: W1213 01:56:05.522401 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.524687 kubelet[3281]: E1213 01:56:05.524431 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.524687 kubelet[3281]: W1213 01:56:05.524476 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.524687 kubelet[3281]: E1213 01:56:05.524583 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.524687 kubelet[3281]: E1213 01:56:05.524623 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.525989 kubelet[3281]: E1213 01:56:05.525531 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.525989 kubelet[3281]: W1213 01:56:05.525598 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.525989 kubelet[3281]: E1213 01:56:05.525921 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.529466 kubelet[3281]: E1213 01:56:05.526361 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.529466 kubelet[3281]: W1213 01:56:05.526382 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.529466 kubelet[3281]: E1213 01:56:05.526716 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.529466 kubelet[3281]: W1213 01:56:05.526733 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.529466 kubelet[3281]: E1213 01:56:05.526991 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.529466 kubelet[3281]: W1213 01:56:05.527007 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.529466 kubelet[3281]: E1213 01:56:05.527266 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.529466 kubelet[3281]: W1213 01:56:05.527284 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.529466 kubelet[3281]: E1213 01:56:05.527583 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.529466 kubelet[3281]: W1213 01:56:05.527601 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.529466 kubelet[3281]: E1213 01:56:05.528021 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.531230 kubelet[3281]: W1213 01:56:05.528041 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.531230 kubelet[3281]: E1213 01:56:05.528069 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.531230 kubelet[3281]: E1213 01:56:05.528393 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.531230 kubelet[3281]: W1213 01:56:05.528411 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.531230 kubelet[3281]: E1213 01:56:05.528434 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.531230 kubelet[3281]: E1213 01:56:05.528797 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.531230 kubelet[3281]: W1213 01:56:05.528817 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.531230 kubelet[3281]: E1213 01:56:05.528841 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.531230 kubelet[3281]: E1213 01:56:05.529116 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.531230 kubelet[3281]: E1213 01:56:05.529149 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.529177 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.529229 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.529256 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.529710 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.534728 kubelet[3281]: W1213 01:56:05.529733 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.529761 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.530146 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.534728 kubelet[3281]: W1213 01:56:05.530163 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.530186 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.534728 kubelet[3281]: E1213 01:56:05.530499 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.531969 systemd[1]: Started cri-containerd-cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8.scope - libcontainer container cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8. Dec 13 01:56:05.535529 kubelet[3281]: W1213 01:56:05.530514 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.535529 kubelet[3281]: E1213 01:56:05.530536 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.535529 kubelet[3281]: E1213 01:56:05.530954 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.535529 kubelet[3281]: W1213 01:56:05.530973 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.535529 kubelet[3281]: E1213 01:56:05.530996 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.535529 kubelet[3281]: E1213 01:56:05.531272 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.535529 kubelet[3281]: W1213 01:56:05.531287 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.535529 kubelet[3281]: E1213 01:56:05.531306 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.535529 kubelet[3281]: E1213 01:56:05.531575 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.535529 kubelet[3281]: W1213 01:56:05.531592 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.538437 kubelet[3281]: E1213 01:56:05.531610 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.538437 kubelet[3281]: E1213 01:56:05.532938 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.538437 kubelet[3281]: W1213 01:56:05.532963 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.538437 kubelet[3281]: E1213 01:56:05.532995 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.538437 kubelet[3281]: E1213 01:56:05.534699 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.538437 kubelet[3281]: W1213 01:56:05.536836 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.538437 kubelet[3281]: E1213 01:56:05.537682 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.553833 kubelet[3281]: E1213 01:56:05.553781 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.553833 kubelet[3281]: W1213 01:56:05.553820 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.554130 kubelet[3281]: E1213 01:56:05.553855 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.577810 kubelet[3281]: E1213 01:56:05.577600 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.577810 kubelet[3281]: W1213 01:56:05.577656 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.577810 kubelet[3281]: E1213 01:56:05.577710 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.606733 kubelet[3281]: E1213 01:56:05.606675 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.606733 kubelet[3281]: W1213 01:56:05.606715 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.607070 kubelet[3281]: E1213 01:56:05.606751 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.607252 kubelet[3281]: E1213 01:56:05.607195 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.607252 kubelet[3281]: W1213 01:56:05.607228 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.607252 kubelet[3281]: E1213 01:56:05.607267 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.609258 kubelet[3281]: E1213 01:56:05.609046 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.609258 kubelet[3281]: W1213 01:56:05.609103 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.609258 kubelet[3281]: E1213 01:56:05.609172 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.610585 kubelet[3281]: E1213 01:56:05.610031 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.610585 kubelet[3281]: W1213 01:56:05.610080 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.610585 kubelet[3281]: E1213 01:56:05.610257 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.611671 kubelet[3281]: E1213 01:56:05.611434 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.611671 kubelet[3281]: W1213 01:56:05.611505 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.611671 kubelet[3281]: E1213 01:56:05.611575 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.614155 kubelet[3281]: E1213 01:56:05.613586 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.614155 kubelet[3281]: W1213 01:56:05.613620 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.614155 kubelet[3281]: E1213 01:56:05.613727 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.615501 kubelet[3281]: E1213 01:56:05.614827 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.615501 kubelet[3281]: W1213 01:56:05.615078 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.615501 kubelet[3281]: E1213 01:56:05.615147 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.616920 kubelet[3281]: E1213 01:56:05.616234 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.616920 kubelet[3281]: W1213 01:56:05.616264 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.617907 kubelet[3281]: E1213 01:56:05.617403 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.617907 kubelet[3281]: W1213 01:56:05.617448 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.617907 kubelet[3281]: E1213 01:56:05.617407 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.617907 kubelet[3281]: E1213 01:56:05.617760 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.619224 kubelet[3281]: E1213 01:56:05.619090 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.619377 kubelet[3281]: W1213 01:56:05.619350 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.620926 kubelet[3281]: E1213 01:56:05.620733 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.621519 kubelet[3281]: E1213 01:56:05.621342 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.621519 kubelet[3281]: W1213 01:56:05.621369 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.621519 kubelet[3281]: E1213 01:56:05.621436 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.623261 kubelet[3281]: E1213 01:56:05.622744 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.623261 kubelet[3281]: W1213 01:56:05.622774 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.623261 kubelet[3281]: E1213 01:56:05.622850 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.624720 kubelet[3281]: E1213 01:56:05.624222 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.624720 kubelet[3281]: W1213 01:56:05.624363 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.624720 kubelet[3281]: E1213 01:56:05.624431 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.625719 kubelet[3281]: E1213 01:56:05.625629 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.626244 kubelet[3281]: W1213 01:56:05.625793 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.626244 kubelet[3281]: E1213 01:56:05.625860 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.627244 kubelet[3281]: E1213 01:56:05.626925 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.627244 kubelet[3281]: W1213 01:56:05.626991 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.627244 kubelet[3281]: E1213 01:56:05.627051 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.628061 kubelet[3281]: E1213 01:56:05.627877 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.628061 kubelet[3281]: W1213 01:56:05.627904 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.628061 kubelet[3281]: E1213 01:56:05.627962 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.628985 kubelet[3281]: E1213 01:56:05.628821 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.628985 kubelet[3281]: W1213 01:56:05.628846 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.629379 kubelet[3281]: E1213 01:56:05.629356 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.629778 kubelet[3281]: E1213 01:56:05.629509 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.629778 kubelet[3281]: W1213 01:56:05.629625 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.630103 kubelet[3281]: E1213 01:56:05.629920 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.630856 kubelet[3281]: E1213 01:56:05.630814 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.630856 kubelet[3281]: W1213 01:56:05.630849 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.631167 kubelet[3281]: E1213 01:56:05.631074 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.632092 kubelet[3281]: E1213 01:56:05.632067 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.632313 kubelet[3281]: W1213 01:56:05.632204 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.632313 kubelet[3281]: E1213 01:56:05.632270 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.635097 kubelet[3281]: E1213 01:56:05.634837 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.635097 kubelet[3281]: W1213 01:56:05.634872 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.635097 kubelet[3281]: E1213 01:56:05.634949 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.635630 kubelet[3281]: E1213 01:56:05.635518 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.635630 kubelet[3281]: W1213 01:56:05.635543 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.635630 kubelet[3281]: E1213 01:56:05.635594 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.637270 kubelet[3281]: E1213 01:56:05.637035 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.637270 kubelet[3281]: W1213 01:56:05.637068 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.637905 kubelet[3281]: E1213 01:56:05.637846 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.638614 kubelet[3281]: E1213 01:56:05.638582 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.638827 kubelet[3281]: W1213 01:56:05.638697 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.639769 kubelet[3281]: E1213 01:56:05.639193 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.639769 kubelet[3281]: E1213 01:56:05.639567 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.639769 kubelet[3281]: W1213 01:56:05.639590 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.639769 kubelet[3281]: E1213 01:56:05.639619 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:05.663450 kubelet[3281]: E1213 01:56:05.662332 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:05.663450 kubelet[3281]: W1213 01:56:05.662373 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:05.663450 kubelet[3281]: E1213 01:56:05.662412 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:05.717722 containerd[2044]: time="2024-12-13T01:56:05.717341329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d4f89b5b9-mmc25,Uid:a3d1eb13-2428-4586-93d9-9fa7d23cd9e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\"" Dec 13 01:56:05.722559 containerd[2044]: time="2024-12-13T01:56:05.722478649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:56:05.871031 containerd[2044]: time="2024-12-13T01:56:05.870873842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jlcvd,Uid:46af5ed3-04d4-4283-aa3b-cd658fdc701a,Namespace:calico-system,Attempt:0,}" Dec 13 01:56:05.934822 containerd[2044]: time="2024-12-13T01:56:05.934322570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:05.934822 containerd[2044]: time="2024-12-13T01:56:05.934426586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:05.934822 containerd[2044]: time="2024-12-13T01:56:05.934471682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:05.936013 containerd[2044]: time="2024-12-13T01:56:05.934972178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:05.986564 systemd[1]: Started cri-containerd-139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4.scope - libcontainer container 139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4. Dec 13 01:56:06.140137 containerd[2044]: time="2024-12-13T01:56:06.136466399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jlcvd,Uid:46af5ed3-04d4-4283-aa3b-cd658fdc701a,Namespace:calico-system,Attempt:0,} returns sandbox id \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\"" Dec 13 01:56:06.977504 kubelet[3281]: E1213 01:56:06.977424 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:07.087536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991870506.mount: Deactivated successfully. 
Dec 13 01:56:08.090398 containerd[2044]: time="2024-12-13T01:56:08.088732429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:08.090398 containerd[2044]: time="2024-12-13T01:56:08.090291697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 01:56:08.093223 containerd[2044]: time="2024-12-13T01:56:08.093099049Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:08.097939 containerd[2044]: time="2024-12-13T01:56:08.097875721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:08.101157 containerd[2044]: time="2024-12-13T01:56:08.100932841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.378376444s" Dec 13 01:56:08.101157 containerd[2044]: time="2024-12-13T01:56:08.101021521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:56:08.103832 containerd[2044]: time="2024-12-13T01:56:08.103172353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:56:08.144330 containerd[2044]: time="2024-12-13T01:56:08.144266005Z" level=info msg="CreateContainer within sandbox \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:56:08.176251 containerd[2044]: time="2024-12-13T01:56:08.176174617Z" level=info msg="CreateContainer within sandbox \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\"" Dec 13 01:56:08.176947 containerd[2044]: time="2024-12-13T01:56:08.176900053Z" level=info msg="StartContainer for \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\"" Dec 13 01:56:08.237965 systemd[1]: Started cri-containerd-4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461.scope - libcontainer container 4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461. 
Dec 13 01:56:08.318364 containerd[2044]: time="2024-12-13T01:56:08.318268802Z" level=info msg="StartContainer for \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\" returns successfully" Dec 13 01:56:08.979418 kubelet[3281]: E1213 01:56:08.978905 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:09.239186 kubelet[3281]: E1213 01:56:09.239033 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.239186 kubelet[3281]: W1213 01:56:09.239076 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.239186 kubelet[3281]: E1213 01:56:09.239112 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.241536 kubelet[3281]: E1213 01:56:09.239807 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.241536 kubelet[3281]: W1213 01:56:09.239831 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.241536 kubelet[3281]: E1213 01:56:09.239858 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.241536 kubelet[3281]: E1213 01:56:09.240200 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.241536 kubelet[3281]: W1213 01:56:09.240218 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.241536 kubelet[3281]: E1213 01:56:09.240238 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.243486 kubelet[3281]: E1213 01:56:09.243428 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.243486 kubelet[3281]: W1213 01:56:09.243482 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.243875 kubelet[3281]: E1213 01:56:09.243518 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:09.244197 kubelet[3281]: E1213 01:56:09.244166 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.244264 kubelet[3281]: W1213 01:56:09.244196 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.244264 kubelet[3281]: E1213 01:56:09.244230 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.244603 kubelet[3281]: E1213 01:56:09.244575 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.244712 kubelet[3281]: W1213 01:56:09.244601 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.244712 kubelet[3281]: E1213 01:56:09.244623 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.245179 kubelet[3281]: E1213 01:56:09.245149 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.245250 kubelet[3281]: W1213 01:56:09.245188 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.245250 kubelet[3281]: E1213 01:56:09.245214 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.245618 kubelet[3281]: E1213 01:56:09.245588 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.245618 kubelet[3281]: W1213 01:56:09.245651 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.245618 kubelet[3281]: E1213 01:56:09.245711 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.245618 kubelet[3281]: E1213 01:56:09.246956 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.245618 kubelet[3281]: W1213 01:56:09.246973 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.245618 kubelet[3281]: E1213 01:56:09.246993 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:09.247457 kubelet[3281]: E1213 01:56:09.247323 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.247457 kubelet[3281]: W1213 01:56:09.247339 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.247457 kubelet[3281]: E1213 01:56:09.247359 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.247771 kubelet[3281]: E1213 01:56:09.247743 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.247836 kubelet[3281]: W1213 01:56:09.247769 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.247836 kubelet[3281]: E1213 01:56:09.247791 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.248166 kubelet[3281]: E1213 01:56:09.248137 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.248166 kubelet[3281]: W1213 01:56:09.248163 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.248294 kubelet[3281]: E1213 01:56:09.248186 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.250050 kubelet[3281]: E1213 01:56:09.248603 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.250050 kubelet[3281]: W1213 01:56:09.248698 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.250050 kubelet[3281]: E1213 01:56:09.248727 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.250050 kubelet[3281]: E1213 01:56:09.249509 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.250050 kubelet[3281]: W1213 01:56:09.249531 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.250050 kubelet[3281]: E1213 01:56:09.249557 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:09.250050 kubelet[3281]: E1213 01:56:09.249943 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.250050 kubelet[3281]: W1213 01:56:09.249964 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.250050 kubelet[3281]: E1213 01:56:09.249988 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.255178 kubelet[3281]: E1213 01:56:09.255125 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.255626 kubelet[3281]: W1213 01:56:09.255194 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.255626 kubelet[3281]: E1213 01:56:09.255231 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.256933 kubelet[3281]: E1213 01:56:09.256888 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.257358 kubelet[3281]: W1213 01:56:09.257124 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.257358 kubelet[3281]: E1213 01:56:09.257204 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.257813 kubelet[3281]: E1213 01:56:09.257704 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.257813 kubelet[3281]: W1213 01:56:09.257730 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.257813 kubelet[3281]: E1213 01:56:09.257769 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.259735 kubelet[3281]: E1213 01:56:09.258950 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.259735 kubelet[3281]: W1213 01:56:09.258988 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.259735 kubelet[3281]: E1213 01:56:09.259245 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:09.259735 kubelet[3281]: E1213 01:56:09.259448 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.259735 kubelet[3281]: W1213 01:56:09.259481 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.259735 kubelet[3281]: E1213 01:56:09.259532 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.261697 kubelet[3281]: E1213 01:56:09.260568 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.261697 kubelet[3281]: W1213 01:56:09.260724 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.261697 kubelet[3281]: E1213 01:56:09.261092 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.261697 kubelet[3281]: W1213 01:56:09.261111 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.261697 kubelet[3281]: E1213 01:56:09.261235 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.261697 kubelet[3281]: E1213 01:56:09.261285 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.261697 kubelet[3281]: E1213 01:56:09.261384 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.261697 kubelet[3281]: W1213 01:56:09.261400 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.261697 kubelet[3281]: E1213 01:56:09.261422 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.263902 kubelet[3281]: E1213 01:56:09.261735 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.263902 kubelet[3281]: W1213 01:56:09.261772 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.263902 kubelet[3281]: E1213 01:56:09.261796 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:09.263902 kubelet[3281]: E1213 01:56:09.262122 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.263902 kubelet[3281]: W1213 01:56:09.262140 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.263902 kubelet[3281]: E1213 01:56:09.262174 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.263902 kubelet[3281]: E1213 01:56:09.262887 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.263902 kubelet[3281]: W1213 01:56:09.262914 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.263902 kubelet[3281]: E1213 01:56:09.262955 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.264662 kubelet[3281]: E1213 01:56:09.264367 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.264662 kubelet[3281]: W1213 01:56:09.264416 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.264662 kubelet[3281]: E1213 01:56:09.264536 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.264956 kubelet[3281]: E1213 01:56:09.264926 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.265088 kubelet[3281]: W1213 01:56:09.264955 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.265088 kubelet[3281]: E1213 01:56:09.265201 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.266152 kubelet[3281]: E1213 01:56:09.266080 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.266152 kubelet[3281]: W1213 01:56:09.266115 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.266369 kubelet[3281]: E1213 01:56:09.266156 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:09.268255 kubelet[3281]: E1213 01:56:09.268145 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.268255 kubelet[3281]: W1213 01:56:09.268247 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.268444 kubelet[3281]: E1213 01:56:09.268385 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.269334 kubelet[3281]: E1213 01:56:09.269242 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.269334 kubelet[3281]: W1213 01:56:09.269276 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.269334 kubelet[3281]: E1213 01:56:09.269318 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.270182 kubelet[3281]: E1213 01:56:09.270111 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.270182 kubelet[3281]: W1213 01:56:09.270155 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.270182 kubelet[3281]: E1213 01:56:09.270197 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:56:09.271370 kubelet[3281]: E1213 01:56:09.271331 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:56:09.271370 kubelet[3281]: W1213 01:56:09.271365 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:56:09.271579 kubelet[3281]: E1213 01:56:09.271395 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:56:09.410944 containerd[2044]: time="2024-12-13T01:56:09.410853027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.413600 containerd[2044]: time="2024-12-13T01:56:09.413522259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:56:09.417813 containerd[2044]: time="2024-12-13T01:56:09.417711051Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.427199 containerd[2044]: time="2024-12-13T01:56:09.427091931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.433696 containerd[2044]: time="2024-12-13T01:56:09.432743487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.329474222s" Dec 13 01:56:09.433696 containerd[2044]: time="2024-12-13T01:56:09.432822531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:56:09.440368 containerd[2044]: time="2024-12-13T01:56:09.440318247Z" level=info msg="CreateContainer within sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:56:09.471073 containerd[2044]: time="2024-12-13T01:56:09.470991952Z" level=info msg="CreateContainer within sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4\"" Dec 13 01:56:09.474138 containerd[2044]: time="2024-12-13T01:56:09.473540056Z" level=info msg="StartContainer for \"e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4\"" Dec 13 01:56:09.547008 systemd[1]: Started cri-containerd-e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4.scope - libcontainer container e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4. Dec 13 01:56:09.611502 containerd[2044]: time="2024-12-13T01:56:09.610740448Z" level=info msg="StartContainer for \"e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4\" returns successfully" Dec 13 01:56:09.659197 systemd[1]: cri-containerd-e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4.scope: Deactivated successfully. 
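
The flexvol-driver container that just ran and exited (hence the scope deactivation) is Calico's pod2daemon-flexvol image; it installs the FlexVolume driver binary (uds) that the probe errors above were looking for. The directory name in those errors follows the FlexVolume convention of vendor~driver, with the executable named after the driver, illustrated by this standalone sketch (not kubelet code):

    // flexvol_path_sketch.go: shows how a FlexVolume driver name such as
    // "nodeagent/uds" maps to the on-disk path seen in the kubelet errors.
    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    func flexVolumeExecPath(pluginDir, driver string) string {
        dir := strings.ReplaceAll(driver, "/", "~")      // nodeagent/uds -> nodeagent~uds
        exe := driver[strings.LastIndex(driver, "/")+1:] // executable named after the driver
        return filepath.Join(pluginDir, dir, exe)
    }

    func main() {
        fmt.Println(flexVolumeExecPath(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec", "nodeagent/uds"))
        // prints .../nodeagent~uds/uds, the exact path the kubelet was probing
    }
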
Dec 13 01:56:09.897716 containerd[2044]: time="2024-12-13T01:56:09.897489666Z" level=info msg="shim disconnected" id=e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4 namespace=k8s.io Dec 13 01:56:09.897716 containerd[2044]: time="2024-12-13T01:56:09.897569502Z" level=warning msg="cleaning up after shim disconnected" id=e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4 namespace=k8s.io Dec 13 01:56:09.897716 containerd[2044]: time="2024-12-13T01:56:09.897589314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:10.118241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4-rootfs.mount: Deactivated successfully. Dec 13 01:56:10.179378 kubelet[3281]: I1213 01:56:10.179240 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:56:10.182840 containerd[2044]: time="2024-12-13T01:56:10.182727471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:56:10.211498 kubelet[3281]: I1213 01:56:10.211337 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d4f89b5b9-mmc25" podStartSLOduration=2.829537815 podStartE2EDuration="5.211184643s" podCreationTimestamp="2024-12-13 01:56:05 +0000 UTC" firstStartedPulling="2024-12-13 01:56:05.720818713 +0000 UTC m=+22.990873879" lastFinishedPulling="2024-12-13 01:56:08.102465553 +0000 UTC m=+25.372520707" observedRunningTime="2024-12-13 01:56:09.196829258 +0000 UTC m=+26.466884436" watchObservedRunningTime="2024-12-13 01:56:10.211184643 +0000 UTC m=+27.481239845" Dec 13 01:56:10.979138 kubelet[3281]: E1213 01:56:10.977465 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:12.980606 kubelet[3281]: E1213 01:56:12.979243 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:14.039936 containerd[2044]: time="2024-12-13T01:56:14.039847278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:14.041480 containerd[2044]: time="2024-12-13T01:56:14.041420058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:56:14.044707 containerd[2044]: time="2024-12-13T01:56:14.044013378Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:14.049981 containerd[2044]: time="2024-12-13T01:56:14.049767846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:14.051964 containerd[2044]: time="2024-12-13T01:56:14.051563970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.868751311s" Dec 13 01:56:14.051964 containerd[2044]: time="2024-12-13T01:56:14.051719478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:56:14.058358 containerd[2044]: time="2024-12-13T01:56:14.058207002Z" level=info msg="CreateContainer within sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:56:14.082058 containerd[2044]: time="2024-12-13T01:56:14.081989658Z" level=info msg="CreateContainer within sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5\"" Dec 13 01:56:14.084943 containerd[2044]: time="2024-12-13T01:56:14.083987166Z" level=info msg="StartContainer for \"4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5\"" Dec 13 01:56:14.151003 systemd[1]: Started cri-containerd-4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5.scope - libcontainer container 4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5. Dec 13 01:56:14.210501 containerd[2044]: time="2024-12-13T01:56:14.210408511Z" level=info msg="StartContainer for \"4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5\" returns successfully" Dec 13 01:56:14.980677 kubelet[3281]: E1213 01:56:14.979056 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:15.094512 containerd[2044]: time="2024-12-13T01:56:15.094441820Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:56:15.100922 systemd[1]: cri-containerd-4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5.scope: Deactivated successfully. Dec 13 01:56:15.142804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:15.160629 kubelet[3281]: I1213 01:56:15.160575 3281 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:56:15.266091 kubelet[3281]: I1213 01:56:15.265193 3281 topology_manager.go:215] "Topology Admit Handler" podUID="92de6f5a-467f-4f9f-aa0a-c0c83f71a31b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nxrmk" Dec 13 01:56:15.269429 kubelet[3281]: I1213 01:56:15.267969 3281 topology_manager.go:215] "Topology Admit Handler" podUID="1fd116f0-a6fd-4513-9f18-4fa2b846559e" podNamespace="calico-system" podName="calico-kube-controllers-56cc9599d8-srdrd" Dec 13 01:56:15.274307 kubelet[3281]: I1213 01:56:15.274246 3281 topology_manager.go:215] "Topology Admit Handler" podUID="8404c66f-b027-4878-814b-c22b0f9622a6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tw7vs" Dec 13 01:56:15.277292 kubelet[3281]: I1213 01:56:15.276980 3281 topology_manager.go:215] "Topology Admit Handler" podUID="9859ed10-6294-4331-aad0-3ead71dc6b50" podNamespace="calico-apiserver" podName="calico-apiserver-f77bf5bb4-59lz9" Dec 13 01:56:15.286690 kubelet[3281]: I1213 01:56:15.277833 3281 topology_manager.go:215] "Topology Admit Handler" podUID="939dd521-1757-4ed9-83b7-813ae796a6af" podNamespace="calico-apiserver" podName="calico-apiserver-f77bf5bb4-pchds" Dec 13 01:56:15.286690 kubelet[3281]: W1213 01:56:15.284230 3281 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:56:15.286690 kubelet[3281]: E1213 01:56:15.284272 3281 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:56:15.304555 kubelet[3281]: W1213 01:56:15.293189 3281 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:56:15.304555 kubelet[3281]: E1213 01:56:15.293245 3281 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:56:15.304555 kubelet[3281]: W1213 01:56:15.293188 3281 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-88" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:56:15.304555 kubelet[3281]: E1213 01:56:15.293283 3281 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-88" 
cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-88' and this object Dec 13 01:56:15.297340 systemd[1]: Created slice kubepods-besteffort-pod1fd116f0_a6fd_4513_9f18_4fa2b846559e.slice - libcontainer container kubepods-besteffort-pod1fd116f0_a6fd_4513_9f18_4fa2b846559e.slice. Dec 13 01:56:15.314061 kubelet[3281]: I1213 01:56:15.313263 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkdbg\" (UniqueName: \"kubernetes.io/projected/1fd116f0-a6fd-4513-9f18-4fa2b846559e-kube-api-access-wkdbg\") pod \"calico-kube-controllers-56cc9599d8-srdrd\" (UID: \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\") " pod="calico-system/calico-kube-controllers-56cc9599d8-srdrd" Dec 13 01:56:15.314061 kubelet[3281]: I1213 01:56:15.313538 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/939dd521-1757-4ed9-83b7-813ae796a6af-calico-apiserver-certs\") pod \"calico-apiserver-f77bf5bb4-pchds\" (UID: \"939dd521-1757-4ed9-83b7-813ae796a6af\") " pod="calico-apiserver/calico-apiserver-f77bf5bb4-pchds" Dec 13 01:56:15.314061 kubelet[3281]: I1213 01:56:15.313733 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77xx4\" (UniqueName: \"kubernetes.io/projected/9859ed10-6294-4331-aad0-3ead71dc6b50-kube-api-access-77xx4\") pod \"calico-apiserver-f77bf5bb4-59lz9\" (UID: \"9859ed10-6294-4331-aad0-3ead71dc6b50\") " pod="calico-apiserver/calico-apiserver-f77bf5bb4-59lz9" Dec 13 01:56:15.342574 kubelet[3281]: I1213 01:56:15.314948 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7b6\" (UniqueName: \"kubernetes.io/projected/8404c66f-b027-4878-814b-c22b0f9622a6-kube-api-access-dh7b6\") pod \"coredns-7db6d8ff4d-tw7vs\" (UID: \"8404c66f-b027-4878-814b-c22b0f9622a6\") " pod="kube-system/coredns-7db6d8ff4d-tw7vs" Dec 13 01:56:15.342574 kubelet[3281]: I1213 01:56:15.315138 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fd116f0-a6fd-4513-9f18-4fa2b846559e-tigera-ca-bundle\") pod \"calico-kube-controllers-56cc9599d8-srdrd\" (UID: \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\") " pod="calico-system/calico-kube-controllers-56cc9599d8-srdrd" Dec 13 01:56:15.342574 kubelet[3281]: I1213 01:56:15.315298 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6vp\" (UniqueName: \"kubernetes.io/projected/939dd521-1757-4ed9-83b7-813ae796a6af-kube-api-access-rv6vp\") pod \"calico-apiserver-f77bf5bb4-pchds\" (UID: \"939dd521-1757-4ed9-83b7-813ae796a6af\") " pod="calico-apiserver/calico-apiserver-f77bf5bb4-pchds" Dec 13 01:56:15.342574 kubelet[3281]: I1213 01:56:15.315342 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwj8\" (UniqueName: \"kubernetes.io/projected/92de6f5a-467f-4f9f-aa0a-c0c83f71a31b-kube-api-access-cvwj8\") pod \"coredns-7db6d8ff4d-nxrmk\" (UID: \"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b\") " pod="kube-system/coredns-7db6d8ff4d-nxrmk" Dec 13 01:56:15.342574 kubelet[3281]: I1213 01:56:15.315727 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/8404c66f-b027-4878-814b-c22b0f9622a6-config-volume\") pod \"coredns-7db6d8ff4d-tw7vs\" (UID: \"8404c66f-b027-4878-814b-c22b0f9622a6\") " pod="kube-system/coredns-7db6d8ff4d-tw7vs" Dec 13 01:56:15.319340 systemd[1]: Created slice kubepods-burstable-pod92de6f5a_467f_4f9f_aa0a_c0c83f71a31b.slice - libcontainer container kubepods-burstable-pod92de6f5a_467f_4f9f_aa0a_c0c83f71a31b.slice. Dec 13 01:56:15.343326 kubelet[3281]: I1213 01:56:15.316679 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92de6f5a-467f-4f9f-aa0a-c0c83f71a31b-config-volume\") pod \"coredns-7db6d8ff4d-nxrmk\" (UID: \"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b\") " pod="kube-system/coredns-7db6d8ff4d-nxrmk" Dec 13 01:56:15.343326 kubelet[3281]: I1213 01:56:15.316731 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9859ed10-6294-4331-aad0-3ead71dc6b50-calico-apiserver-certs\") pod \"calico-apiserver-f77bf5bb4-59lz9\" (UID: \"9859ed10-6294-4331-aad0-3ead71dc6b50\") " pod="calico-apiserver/calico-apiserver-f77bf5bb4-59lz9" Dec 13 01:56:15.339296 systemd[1]: Created slice kubepods-burstable-pod8404c66f_b027_4878_814b_c22b0f9622a6.slice - libcontainer container kubepods-burstable-pod8404c66f_b027_4878_814b_c22b0f9622a6.slice. Dec 13 01:56:15.364755 systemd[1]: Created slice kubepods-besteffort-pod939dd521_1757_4ed9_83b7_813ae796a6af.slice - libcontainer container kubepods-besteffort-pod939dd521_1757_4ed9_83b7_813ae796a6af.slice. Dec 13 01:56:15.381135 systemd[1]: Created slice kubepods-besteffort-pod9859ed10_6294_4331_aad0_3ead71dc6b50.slice - libcontainer container kubepods-besteffort-pod9859ed10_6294_4331_aad0_3ead71dc6b50.slice. 
Dec 13 01:56:15.611438 containerd[2044]: time="2024-12-13T01:56:15.611223994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc9599d8-srdrd,Uid:1fd116f0-a6fd-4513-9f18-4fa2b846559e,Namespace:calico-system,Attempt:0,}" Dec 13 01:56:16.062742 containerd[2044]: time="2024-12-13T01:56:16.062226908Z" level=info msg="shim disconnected" id=4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5 namespace=k8s.io Dec 13 01:56:16.062742 containerd[2044]: time="2024-12-13T01:56:16.062307896Z" level=warning msg="cleaning up after shim disconnected" id=4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5 namespace=k8s.io Dec 13 01:56:16.062742 containerd[2044]: time="2024-12-13T01:56:16.062332688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:16.186570 containerd[2044]: time="2024-12-13T01:56:16.186483345Z" level=error msg="Failed to destroy network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.188628 containerd[2044]: time="2024-12-13T01:56:16.187093377Z" level=error msg="encountered an error cleaning up failed sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.188628 containerd[2044]: time="2024-12-13T01:56:16.187173045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc9599d8-srdrd,Uid:1fd116f0-a6fd-4513-9f18-4fa2b846559e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.188835 kubelet[3281]: E1213 01:56:16.187475 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.188835 kubelet[3281]: E1213 01:56:16.187570 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56cc9599d8-srdrd" Dec 13 01:56:16.188835 kubelet[3281]: E1213 01:56:16.187607 3281 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56cc9599d8-srdrd" Dec 13 01:56:16.189693 kubelet[3281]: E1213 01:56:16.187691 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56cc9599d8-srdrd_calico-system(1fd116f0-a6fd-4513-9f18-4fa2b846559e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56cc9599d8-srdrd_calico-system(1fd116f0-a6fd-4513-9f18-4fa2b846559e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56cc9599d8-srdrd" podUID="1fd116f0-a6fd-4513-9f18-4fa2b846559e" Dec 13 01:56:16.193737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2-shm.mount: Deactivated successfully. Dec 13 01:56:16.218149 kubelet[3281]: I1213 01:56:16.216868 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:16.218570 containerd[2044]: time="2024-12-13T01:56:16.217798533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:56:16.220083 containerd[2044]: time="2024-12-13T01:56:16.219029601Z" level=info msg="StopPodSandbox for \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\"" Dec 13 01:56:16.220083 containerd[2044]: time="2024-12-13T01:56:16.219332025Z" level=info msg="Ensure that sandbox 348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2 in task-service has been cleanup successfully" Dec 13 01:56:16.342108 containerd[2044]: time="2024-12-13T01:56:16.341115694Z" level=error msg="StopPodSandbox for \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\" failed" error="failed to destroy network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.342270 kubelet[3281]: E1213 01:56:16.341600 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:16.342270 kubelet[3281]: E1213 01:56:16.341704 3281 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2"} Dec 13 01:56:16.342270 kubelet[3281]: E1213 01:56:16.341790 3281 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:16.342270 kubelet[3281]: E1213 01:56:16.341833 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56cc9599d8-srdrd" podUID="1fd116f0-a6fd-4513-9f18-4fa2b846559e" Dec 13 01:56:16.418579 kubelet[3281]: E1213 01:56:16.418530 3281 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:56:16.418763 kubelet[3281]: E1213 01:56:16.418536 3281 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:56:16.418763 kubelet[3281]: E1213 01:56:16.418729 3281 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92de6f5a-467f-4f9f-aa0a-c0c83f71a31b-config-volume podName:92de6f5a-467f-4f9f-aa0a-c0c83f71a31b nodeName:}" failed. No retries permitted until 2024-12-13 01:56:16.918613242 +0000 UTC m=+34.188668408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/92de6f5a-467f-4f9f-aa0a-c0c83f71a31b-config-volume") pod "coredns-7db6d8ff4d-nxrmk" (UID: "92de6f5a-467f-4f9f-aa0a-c0c83f71a31b") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:56:16.420694 kubelet[3281]: E1213 01:56:16.418762 3281 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8404c66f-b027-4878-814b-c22b0f9622a6-config-volume podName:8404c66f-b027-4878-814b-c22b0f9622a6 nodeName:}" failed. No retries permitted until 2024-12-13 01:56:16.918745926 +0000 UTC m=+34.188801092 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8404c66f-b027-4878-814b-c22b0f9622a6-config-volume") pod "coredns-7db6d8ff4d-tw7vs" (UID: "8404c66f-b027-4878-814b-c22b0f9622a6") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:56:16.580696 containerd[2044]: time="2024-12-13T01:56:16.580353035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-pchds,Uid:939dd521-1757-4ed9-83b7-813ae796a6af,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:56:16.590775 containerd[2044]: time="2024-12-13T01:56:16.590715107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-59lz9,Uid:9859ed10-6294-4331-aad0-3ead71dc6b50,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:56:16.760842 containerd[2044]: time="2024-12-13T01:56:16.760759908Z" level=error msg="Failed to destroy network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.761394 containerd[2044]: time="2024-12-13T01:56:16.761332236Z" level=error msg="encountered an error cleaning up failed sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.761529 containerd[2044]: time="2024-12-13T01:56:16.761429136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-pchds,Uid:939dd521-1757-4ed9-83b7-813ae796a6af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.761918 kubelet[3281]: E1213 01:56:16.761843 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.762030 kubelet[3281]: E1213 01:56:16.761932 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f77bf5bb4-pchds" Dec 13 01:56:16.762030 kubelet[3281]: E1213 01:56:16.761970 3281 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f77bf5bb4-pchds" Dec 13 01:56:16.762162 kubelet[3281]: E1213 01:56:16.762041 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f77bf5bb4-pchds_calico-apiserver(939dd521-1757-4ed9-83b7-813ae796a6af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f77bf5bb4-pchds_calico-apiserver(939dd521-1757-4ed9-83b7-813ae796a6af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f77bf5bb4-pchds" podUID="939dd521-1757-4ed9-83b7-813ae796a6af" Dec 13 01:56:16.774581 containerd[2044]: time="2024-12-13T01:56:16.774249804Z" level=error msg="Failed to destroy network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.774909 containerd[2044]: time="2024-12-13T01:56:16.774854916Z" level=error msg="encountered an error cleaning up failed sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.774988 containerd[2044]: time="2024-12-13T01:56:16.774942564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-59lz9,Uid:9859ed10-6294-4331-aad0-3ead71dc6b50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.775690 kubelet[3281]: E1213 01:56:16.775225 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:16.775690 kubelet[3281]: E1213 01:56:16.775297 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f77bf5bb4-59lz9" Dec 13 01:56:16.775690 kubelet[3281]: E1213 01:56:16.775329 3281 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f77bf5bb4-59lz9" Dec 13 01:56:16.775899 kubelet[3281]: E1213 01:56:16.775401 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f77bf5bb4-59lz9_calico-apiserver(9859ed10-6294-4331-aad0-3ead71dc6b50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f77bf5bb4-59lz9_calico-apiserver(9859ed10-6294-4331-aad0-3ead71dc6b50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f77bf5bb4-59lz9" podUID="9859ed10-6294-4331-aad0-3ead71dc6b50" Dec 13 01:56:16.992360 systemd[1]: Created slice kubepods-besteffort-podfdcc0f22_5979_4fd2_8ab6_4d3cbd4e07e6.slice - libcontainer container kubepods-besteffort-podfdcc0f22_5979_4fd2_8ab6_4d3cbd4e07e6.slice. Dec 13 01:56:17.001188 containerd[2044]: time="2024-12-13T01:56:17.000855513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8ld8,Uid:fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6,Namespace:calico-system,Attempt:0,}" Dec 13 01:56:17.131287 containerd[2044]: time="2024-12-13T01:56:17.129831754Z" level=error msg="Failed to destroy network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.131287 containerd[2044]: time="2024-12-13T01:56:17.131016610Z" level=error msg="encountered an error cleaning up failed sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.131758 containerd[2044]: time="2024-12-13T01:56:17.131299270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8ld8,Uid:fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.131863 kubelet[3281]: E1213 01:56:17.131672 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.131863 kubelet[3281]: E1213 01:56:17.131746 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8ld8" Dec 13 01:56:17.131863 kubelet[3281]: E1213 01:56:17.131784 3281 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8ld8" Dec 13 01:56:17.133700 kubelet[3281]: E1213 01:56:17.131844 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v8ld8_calico-system(fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v8ld8_calico-system(fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:17.133932 containerd[2044]: time="2024-12-13T01:56:17.133175746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxrmk,Uid:92de6f5a-467f-4f9f-aa0a-c0c83f71a31b,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:17.159463 containerd[2044]: time="2024-12-13T01:56:17.159026026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tw7vs,Uid:8404c66f-b027-4878-814b-c22b0f9622a6,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:17.236449 kubelet[3281]: I1213 01:56:17.236058 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:17.239541 containerd[2044]: time="2024-12-13T01:56:17.238974550Z" level=info msg="StopPodSandbox for \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\"" Dec 13 01:56:17.239541 containerd[2044]: time="2024-12-13T01:56:17.239354830Z" level=info msg="Ensure that sandbox 6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077 in task-service has been cleanup successfully" Dec 13 01:56:17.249722 kubelet[3281]: I1213 01:56:17.245718 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:17.249890 containerd[2044]: time="2024-12-13T01:56:17.248261194Z" level=info msg="StopPodSandbox for \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\"" Dec 13 01:56:17.249890 containerd[2044]: time="2024-12-13T01:56:17.248529526Z" level=info msg="Ensure that sandbox 9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348 in task-service has been cleanup successfully" Dec 13 01:56:17.259990 kubelet[3281]: I1213 01:56:17.259953 3281 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:17.263681 containerd[2044]: time="2024-12-13T01:56:17.263228602Z" level=info msg="StopPodSandbox for \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\"" Dec 13 01:56:17.268762 containerd[2044]: time="2024-12-13T01:56:17.268594306Z" level=info msg="Ensure that sandbox 6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2 in task-service has been cleanup successfully" Dec 13 01:56:17.429866 containerd[2044]: time="2024-12-13T01:56:17.429707123Z" level=error msg="StopPodSandbox for \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\" failed" error="failed to destroy network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.431214 kubelet[3281]: E1213 01:56:17.430824 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:17.431743 kubelet[3281]: E1213 01:56:17.431632 3281 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348"} Dec 13 01:56:17.432036 kubelet[3281]: E1213 01:56:17.432003 3281 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9859ed10-6294-4331-aad0-3ead71dc6b50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:17.432275 kubelet[3281]: E1213 01:56:17.432216 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9859ed10-6294-4331-aad0-3ead71dc6b50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f77bf5bb4-59lz9" podUID="9859ed10-6294-4331-aad0-3ead71dc6b50" Dec 13 01:56:17.461242 containerd[2044]: time="2024-12-13T01:56:17.461115119Z" level=error msg="StopPodSandbox for \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\" failed" error="failed to destroy network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.461980 kubelet[3281]: E1213 01:56:17.461926 3281 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:17.462358 kubelet[3281]: E1213 01:56:17.462161 3281 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077"} Dec 13 01:56:17.462358 kubelet[3281]: E1213 01:56:17.462235 3281 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:17.462358 kubelet[3281]: E1213 01:56:17.462288 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8ld8" podUID="fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6" Dec 13 01:56:17.489614 containerd[2044]: time="2024-12-13T01:56:17.488790923Z" level=error msg="StopPodSandbox for \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\" failed" error="failed to destroy network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.490222 kubelet[3281]: E1213 01:56:17.490000 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:17.490222 kubelet[3281]: E1213 01:56:17.490071 3281 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2"} Dec 13 01:56:17.490222 kubelet[3281]: E1213 01:56:17.490126 3281 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"939dd521-1757-4ed9-83b7-813ae796a6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:17.490222 kubelet[3281]: E1213 01:56:17.490172 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"939dd521-1757-4ed9-83b7-813ae796a6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f77bf5bb4-pchds" podUID="939dd521-1757-4ed9-83b7-813ae796a6af" Dec 13 01:56:17.506727 containerd[2044]: time="2024-12-13T01:56:17.506104931Z" level=error msg="Failed to destroy network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.506854 containerd[2044]: time="2024-12-13T01:56:17.506744387Z" level=error msg="encountered an error cleaning up failed sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.506854 containerd[2044]: time="2024-12-13T01:56:17.506827403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxrmk,Uid:92de6f5a-467f-4f9f-aa0a-c0c83f71a31b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.508574 kubelet[3281]: E1213 01:56:17.507195 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.508574 kubelet[3281]: E1213 01:56:17.507271 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxrmk" Dec 13 01:56:17.508574 kubelet[3281]: E1213 01:56:17.507302 3281 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nxrmk" Dec 13 01:56:17.508924 kubelet[3281]: E1213 01:56:17.507360 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxrmk_kube-system(92de6f5a-467f-4f9f-aa0a-c0c83f71a31b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxrmk_kube-system(92de6f5a-467f-4f9f-aa0a-c0c83f71a31b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxrmk" podUID="92de6f5a-467f-4f9f-aa0a-c0c83f71a31b" Dec 13 01:56:17.554130 containerd[2044]: time="2024-12-13T01:56:17.554003976Z" level=error msg="Failed to destroy network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.555337 containerd[2044]: time="2024-12-13T01:56:17.554766156Z" level=error msg="encountered an error cleaning up failed sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.555337 containerd[2044]: time="2024-12-13T01:56:17.554872332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tw7vs,Uid:8404c66f-b027-4878-814b-c22b0f9622a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.557171 kubelet[3281]: E1213 01:56:17.555318 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:17.557171 kubelet[3281]: E1213 01:56:17.555392 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tw7vs" Dec 13 01:56:17.557171 kubelet[3281]: E1213 01:56:17.555443 3281 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tw7vs" Dec 13 01:56:17.557396 kubelet[3281]: E1213 01:56:17.555535 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tw7vs_kube-system(8404c66f-b027-4878-814b-c22b0f9622a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tw7vs_kube-system(8404c66f-b027-4878-814b-c22b0f9622a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tw7vs" podUID="8404c66f-b027-4878-814b-c22b0f9622a6" Dec 13 01:56:18.142321 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0-shm.mount: Deactivated successfully. Dec 13 01:56:18.142515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645-shm.mount: Deactivated successfully. Dec 13 01:56:18.271203 kubelet[3281]: I1213 01:56:18.270245 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:18.273940 containerd[2044]: time="2024-12-13T01:56:18.273871655Z" level=info msg="StopPodSandbox for \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\"" Dec 13 01:56:18.274497 containerd[2044]: time="2024-12-13T01:56:18.274206359Z" level=info msg="Ensure that sandbox 98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645 in task-service has been cleanup successfully" Dec 13 01:56:18.282455 kubelet[3281]: I1213 01:56:18.282339 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:18.284109 containerd[2044]: time="2024-12-13T01:56:18.283938599Z" level=info msg="StopPodSandbox for \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\"" Dec 13 01:56:18.284411 containerd[2044]: time="2024-12-13T01:56:18.284298023Z" level=info msg="Ensure that sandbox f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0 in task-service has been cleanup successfully" Dec 13 01:56:18.370827 containerd[2044]: time="2024-12-13T01:56:18.370307220Z" level=error msg="StopPodSandbox for \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\" failed" error="failed to destroy network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:18.371048 kubelet[3281]: E1213 01:56:18.370600 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:18.371048 kubelet[3281]: E1213 01:56:18.370840 3281 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645"} Dec 13 01:56:18.371048 kubelet[3281]: E1213 01:56:18.370931 3281 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:18.371048 kubelet[3281]: E1213 01:56:18.370979 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nxrmk" podUID="92de6f5a-467f-4f9f-aa0a-c0c83f71a31b" Dec 13 01:56:18.388803 containerd[2044]: time="2024-12-13T01:56:18.388718220Z" level=error msg="StopPodSandbox for \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\" failed" error="failed to destroy network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:56:18.389078 kubelet[3281]: E1213 01:56:18.389009 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:18.389186 kubelet[3281]: E1213 01:56:18.389088 3281 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0"} Dec 13 01:56:18.389186 kubelet[3281]: E1213 01:56:18.389144 3281 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8404c66f-b027-4878-814b-c22b0f9622a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:56:18.389345 kubelet[3281]: E1213 01:56:18.389183 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8404c66f-b027-4878-814b-c22b0f9622a6\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tw7vs" podUID="8404c66f-b027-4878-814b-c22b0f9622a6" Dec 13 01:56:19.888961 systemd[1]: Started sshd@7-172.31.19.88:22-139.178.68.195:35654.service - OpenSSH per-connection server daemon (139.178.68.195:35654). Dec 13 01:56:20.104079 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 35654 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:20.109396 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:20.121097 systemd-logind[2008]: New session 8 of user core. Dec 13 01:56:20.130001 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:56:20.457873 sshd[4414]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:20.472705 systemd[1]: sshd@7-172.31.19.88:22-139.178.68.195:35654.service: Deactivated successfully. Dec 13 01:56:20.478928 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:56:20.483162 systemd-logind[2008]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:56:20.486852 systemd-logind[2008]: Removed session 8. Dec 13 01:56:23.538283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280090123.mount: Deactivated successfully. Dec 13 01:56:23.639974 containerd[2044]: time="2024-12-13T01:56:23.639887106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:23.642464 containerd[2044]: time="2024-12-13T01:56:23.642367806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:56:23.645508 containerd[2044]: time="2024-12-13T01:56:23.645377010Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:23.653058 containerd[2044]: time="2024-12-13T01:56:23.652771902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:23.654698 containerd[2044]: time="2024-12-13T01:56:23.654415698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.436557297s" Dec 13 01:56:23.654698 containerd[2044]: time="2024-12-13T01:56:23.654476226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:56:23.698624 containerd[2044]: time="2024-12-13T01:56:23.698484162Z" level=info msg="CreateContainer within sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:56:23.749267 containerd[2044]: time="2024-12-13T01:56:23.749136031Z" level=info 
msg="CreateContainer within sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\"" Dec 13 01:56:23.751198 containerd[2044]: time="2024-12-13T01:56:23.750984391Z" level=info msg="StartContainer for \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\"" Dec 13 01:56:23.823044 systemd[1]: Started cri-containerd-b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00.scope - libcontainer container b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00. Dec 13 01:56:23.899425 containerd[2044]: time="2024-12-13T01:56:23.899298883Z" level=info msg="StartContainer for \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\" returns successfully" Dec 13 01:56:24.037116 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:56:24.037254 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:56:25.502134 systemd[1]: Started sshd@8-172.31.19.88:22-139.178.68.195:35666.service - OpenSSH per-connection server daemon (139.178.68.195:35666). Dec 13 01:56:25.697438 sshd[4541]: Accepted publickey for core from 139.178.68.195 port 35666 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:25.700873 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:25.716101 systemd-logind[2008]: New session 9 of user core. Dec 13 01:56:25.744314 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:56:26.147018 sshd[4541]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:26.156871 systemd[1]: sshd@8-172.31.19.88:22-139.178.68.195:35666.service: Deactivated successfully. Dec 13 01:56:26.163447 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:56:26.169566 systemd-logind[2008]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:56:26.172824 systemd-logind[2008]: Removed session 9. Dec 13 01:56:27.164465 kubelet[3281]: I1213 01:56:27.163987 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:56:27.189147 kubelet[3281]: I1213 01:56:27.187449 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jlcvd" podStartSLOduration=4.671290865 podStartE2EDuration="22.187425932s" podCreationTimestamp="2024-12-13 01:56:05 +0000 UTC" firstStartedPulling="2024-12-13 01:56:06.140841095 +0000 UTC m=+23.410896261" lastFinishedPulling="2024-12-13 01:56:23.656976174 +0000 UTC m=+40.927031328" observedRunningTime="2024-12-13 01:56:24.376703442 +0000 UTC m=+41.646758800" watchObservedRunningTime="2024-12-13 01:56:27.187425932 +0000 UTC m=+44.457481098" Dec 13 01:56:27.810677 kernel: bpftool[4705]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:56:28.115257 systemd-networkd[1926]: vxlan.calico: Link UP Dec 13 01:56:28.115275 systemd-networkd[1926]: vxlan.calico: Gained carrier Dec 13 01:56:28.125122 (udev-worker)[4726]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:28.169147 (udev-worker)[4725]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:56:29.859863 systemd-networkd[1926]: vxlan.calico: Gained IPv6LL Dec 13 01:56:29.978370 containerd[2044]: time="2024-12-13T01:56:29.978315925Z" level=info msg="StopPodSandbox for \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\"" Dec 13 01:56:29.980365 containerd[2044]: time="2024-12-13T01:56:29.979267657Z" level=info msg="StopPodSandbox for \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\"" Dec 13 01:56:29.992786 containerd[2044]: time="2024-12-13T01:56:29.979371877Z" level=info msg="StopPodSandbox for \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\"" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.197 [INFO][4821] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.199 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" iface="eth0" netns="/var/run/netns/cni-b076b888-ccf1-9863-2df7-49c875d7e023" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.202 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" iface="eth0" netns="/var/run/netns/cni-b076b888-ccf1-9863-2df7-49c875d7e023" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.206 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" iface="eth0" netns="/var/run/netns/cni-b076b888-ccf1-9863-2df7-49c875d7e023" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.206 [INFO][4821] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.207 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.283 [INFO][4840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.283 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.283 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.310 [WARNING][4840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.310 [INFO][4840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.313 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:30.321233 containerd[2044]: 2024-12-13 01:56:30.319 [INFO][4821] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:30.322801 containerd[2044]: time="2024-12-13T01:56:30.322424951Z" level=info msg="TearDown network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\" successfully" Dec 13 01:56:30.322801 containerd[2044]: time="2024-12-13T01:56:30.322472495Z" level=info msg="StopPodSandbox for \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\" returns successfully" Dec 13 01:56:30.333251 containerd[2044]: time="2024-12-13T01:56:30.332009303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tw7vs,Uid:8404c66f-b027-4878-814b-c22b0f9622a6,Namespace:kube-system,Attempt:1,}" Dec 13 01:56:30.332933 systemd[1]: run-netns-cni\x2db076b888\x2dccf1\x2d9863\x2d2df7\x2d49c875d7e023.mount: Deactivated successfully. Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.191 [INFO][4822] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.192 [INFO][4822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" iface="eth0" netns="/var/run/netns/cni-255c949d-1573-ec12-20c8-2ef22af68a70" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.194 [INFO][4822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" iface="eth0" netns="/var/run/netns/cni-255c949d-1573-ec12-20c8-2ef22af68a70" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.197 [INFO][4822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" iface="eth0" netns="/var/run/netns/cni-255c949d-1573-ec12-20c8-2ef22af68a70" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.197 [INFO][4822] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.198 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.292 [INFO][4839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.293 [INFO][4839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.313 [INFO][4839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.337 [WARNING][4839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.337 [INFO][4839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.342 [INFO][4839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:30.363683 containerd[2044]: 2024-12-13 01:56:30.346 [INFO][4822] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:30.363683 containerd[2044]: time="2024-12-13T01:56:30.357339239Z" level=info msg="TearDown network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\" successfully" Dec 13 01:56:30.363683 containerd[2044]: time="2024-12-13T01:56:30.357377147Z" level=info msg="StopPodSandbox for \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\" returns successfully" Dec 13 01:56:30.363683 containerd[2044]: time="2024-12-13T01:56:30.360001799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8ld8,Uid:fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6,Namespace:calico-system,Attempt:1,}" Dec 13 01:56:30.384060 systemd[1]: run-netns-cni\x2d255c949d\x2d1573\x2dec12\x2d20c8\x2d2ef22af68a70.mount: Deactivated successfully. Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.225 [INFO][4823] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.226 [INFO][4823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" iface="eth0" netns="/var/run/netns/cni-b50ba7c2-1de3-e998-609e-1b8f93854f99" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.228 [INFO][4823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" iface="eth0" netns="/var/run/netns/cni-b50ba7c2-1de3-e998-609e-1b8f93854f99" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.230 [INFO][4823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" iface="eth0" netns="/var/run/netns/cni-b50ba7c2-1de3-e998-609e-1b8f93854f99" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.230 [INFO][4823] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.230 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.312 [INFO][4847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.312 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.342 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.379 [WARNING][4847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.379 [INFO][4847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.385 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:30.397127 containerd[2044]: 2024-12-13 01:56:30.393 [INFO][4823] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:30.398488 containerd[2044]: time="2024-12-13T01:56:30.398441316Z" level=info msg="TearDown network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\" successfully" Dec 13 01:56:30.398605 containerd[2044]: time="2024-12-13T01:56:30.398576124Z" level=info msg="StopPodSandbox for \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\" returns successfully" Dec 13 01:56:30.400056 containerd[2044]: time="2024-12-13T01:56:30.400003908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-pchds,Uid:939dd521-1757-4ed9-83b7-813ae796a6af,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:56:30.407866 systemd[1]: run-netns-cni\x2db50ba7c2\x2d1de3\x2de998\x2d609e\x2d1b8f93854f99.mount: Deactivated successfully. Dec 13 01:56:30.782210 systemd-networkd[1926]: cali633377fa96c: Link UP Dec 13 01:56:30.782739 systemd-networkd[1926]: cali633377fa96c: Gained carrier Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.526 [INFO][4871] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0 csi-node-driver- calico-system fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6 886 0 2024-12-13 01:56:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-88 csi-node-driver-v8ld8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali633377fa96c [] []}} ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.526 [INFO][4871] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.643 [INFO][4900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" HandleID="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.686 [INFO][4900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" HandleID="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-88", "pod":"csi-node-driver-v8ld8", "timestamp":"2024-12-13 01:56:30.643710937 +0000 UTC"}, Hostname:"ip-172-31-19-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.686 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.686 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.686 [INFO][4900] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-88' Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.692 [INFO][4900] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.707 [INFO][4900] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.719 [INFO][4900] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.723 [INFO][4900] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.735 [INFO][4900] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.735 [INFO][4900] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.742 [INFO][4900] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807 Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.752 [INFO][4900] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.765 [INFO][4900] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.1/26] block=192.168.81.0/26 handle="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.765 [INFO][4900] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.1/26] handle="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" host="ip-172-31-19-88" Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.766 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:56:30.831169 containerd[2044]: 2024-12-13 01:56:30.766 [INFO][4900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.1/26] IPv6=[] ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" HandleID="k8s-pod-network.641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.833756 containerd[2044]: 2024-12-13 01:56:30.772 [INFO][4871] cni-plugin/k8s.go 386: Populated endpoint ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"", Pod:"csi-node-driver-v8ld8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali633377fa96c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:30.833756 containerd[2044]: 2024-12-13 01:56:30.772 [INFO][4871] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.1/32] ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.833756 containerd[2044]: 2024-12-13 01:56:30.772 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali633377fa96c ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.833756 containerd[2044]: 2024-12-13 01:56:30.783 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.833756 containerd[2044]: 2024-12-13 01:56:30.791 [INFO][4871] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" 
WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807", Pod:"csi-node-driver-v8ld8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali633377fa96c", MAC:"6e:c6:45:b5:85:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:30.833756 containerd[2044]: 2024-12-13 01:56:30.824 [INFO][4871] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807" Namespace="calico-system" Pod="csi-node-driver-v8ld8" WorkloadEndpoint="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:30.907682 containerd[2044]: time="2024-12-13T01:56:30.907347938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:30.907682 containerd[2044]: time="2024-12-13T01:56:30.907456526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:30.907682 containerd[2044]: time="2024-12-13T01:56:30.907495346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:30.909612 containerd[2044]: time="2024-12-13T01:56:30.909098954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:30.958706 systemd[1]: Started cri-containerd-641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807.scope - libcontainer container 641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807. 
Dec 13 01:56:30.987666 containerd[2044]: time="2024-12-13T01:56:30.985926578Z" level=info msg="StopPodSandbox for \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\"" Dec 13 01:56:30.988476 systemd-networkd[1926]: califf4471e1f32: Link UP Dec 13 01:56:30.995697 systemd-networkd[1926]: califf4471e1f32: Gained carrier Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.522 [INFO][4861] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0 coredns-7db6d8ff4d- kube-system 8404c66f-b027-4878-814b-c22b0f9622a6 887 0 2024-12-13 01:55:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-88 coredns-7db6d8ff4d-tw7vs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf4471e1f32 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.523 [INFO][4861] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.676 [INFO][4896] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" HandleID="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.717 [INFO][4896] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" HandleID="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031bf40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-88", "pod":"coredns-7db6d8ff4d-tw7vs", "timestamp":"2024-12-13 01:56:30.676759477 +0000 UTC"}, Hostname:"ip-172-31-19-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.717 [INFO][4896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.766 [INFO][4896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.766 [INFO][4896] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-88' Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.772 [INFO][4896] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.811 [INFO][4896] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.843 [INFO][4896] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.851 [INFO][4896] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.857 [INFO][4896] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.857 [INFO][4896] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.884 [INFO][4896] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432 Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.898 [INFO][4896] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.921 [INFO][4896] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.2/26] block=192.168.81.0/26 handle="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.921 [INFO][4896] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.2/26] handle="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" host="ip-172-31-19-88" Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.921 [INFO][4896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:56:31.050549 containerd[2044]: 2024-12-13 01:56:30.921 [INFO][4896] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.2/26] IPv6=[] ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" HandleID="k8s-pod-network.12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:31.053449 containerd[2044]: 2024-12-13 01:56:30.935 [INFO][4861] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8404c66f-b027-4878-814b-c22b0f9622a6", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"", Pod:"coredns-7db6d8ff4d-tw7vs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf4471e1f32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:31.053449 containerd[2044]: 2024-12-13 01:56:30.935 [INFO][4861] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.2/32] ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:31.053449 containerd[2044]: 2024-12-13 01:56:30.935 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf4471e1f32 ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:31.053449 containerd[2044]: 2024-12-13 01:56:30.998 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 
13 01:56:31.053449 containerd[2044]: 2024-12-13 01:56:30.999 [INFO][4861] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8404c66f-b027-4878-814b-c22b0f9622a6", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432", Pod:"coredns-7db6d8ff4d-tw7vs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf4471e1f32", MAC:"62:0c:7e:95:9c:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:31.053449 containerd[2044]: 2024-12-13 01:56:31.037 [INFO][4861] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tw7vs" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:31.167023 systemd-networkd[1926]: calibc650ef2424: Link UP Dec 13 01:56:31.168491 systemd-networkd[1926]: calibc650ef2424: Gained carrier Dec 13 01:56:31.194259 containerd[2044]: time="2024-12-13T01:56:31.191930663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:31.194259 containerd[2044]: time="2024-12-13T01:56:31.192043175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:31.194259 containerd[2044]: time="2024-12-13T01:56:31.192079931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:31.194259 containerd[2044]: time="2024-12-13T01:56:31.192239327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:31.204271 systemd[1]: Started sshd@9-172.31.19.88:22-139.178.68.195:53328.service - OpenSSH per-connection server daemon (139.178.68.195:53328). Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.586 [INFO][4880] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0 calico-apiserver-f77bf5bb4- calico-apiserver 939dd521-1757-4ed9-83b7-813ae796a6af 888 0 2024-12-13 01:56:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f77bf5bb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-88 calico-apiserver-f77bf5bb4-pchds eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc650ef2424 [] []}} ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.587 [INFO][4880] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.719 [INFO][4906] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" HandleID="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.747 [INFO][4906] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" HandleID="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000386d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-88", "pod":"calico-apiserver-f77bf5bb4-pchds", "timestamp":"2024-12-13 01:56:30.719407321 +0000 UTC"}, Hostname:"ip-172-31-19-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.747 [INFO][4906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.922 [INFO][4906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.925 [INFO][4906] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-88' Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.941 [INFO][4906] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:30.984 [INFO][4906] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.026 [INFO][4906] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.041 [INFO][4906] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.066 [INFO][4906] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.068 [INFO][4906] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.077 [INFO][4906] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.108 [INFO][4906] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.137 [INFO][4906] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.3/26] block=192.168.81.0/26 handle="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.137 [INFO][4906] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.3/26] handle="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" host="ip-172-31-19-88" Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.137 [INFO][4906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:56:31.241604 containerd[2044]: 2024-12-13 01:56:31.137 [INFO][4906] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.3/26] IPv6=[] ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" HandleID="k8s-pod-network.1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:31.247451 containerd[2044]: 2024-12-13 01:56:31.152 [INFO][4880] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"939dd521-1757-4ed9-83b7-813ae796a6af", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"", Pod:"calico-apiserver-f77bf5bb4-pchds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc650ef2424", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:31.247451 containerd[2044]: 2024-12-13 01:56:31.153 [INFO][4880] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.3/32] ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:31.247451 containerd[2044]: 2024-12-13 01:56:31.154 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc650ef2424 ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:31.247451 containerd[2044]: 2024-12-13 01:56:31.170 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:31.247451 containerd[2044]: 2024-12-13 01:56:31.172 [INFO][4880] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"939dd521-1757-4ed9-83b7-813ae796a6af", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c", Pod:"calico-apiserver-f77bf5bb4-pchds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc650ef2424", MAC:"66:03:11:fe:c4:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:31.247451 containerd[2044]: 2024-12-13 01:56:31.234 [INFO][4880] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-pchds" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:31.259035 systemd[1]: Started cri-containerd-12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432.scope - libcontainer container 12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432. Dec 13 01:56:31.389830 containerd[2044]: time="2024-12-13T01:56:31.389743512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8ld8,Uid:fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807\"" Dec 13 01:56:31.396997 containerd[2044]: time="2024-12-13T01:56:31.396499452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:56:31.461148 containerd[2044]: time="2024-12-13T01:56:31.460876309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:31.461148 containerd[2044]: time="2024-12-13T01:56:31.460984165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:31.461148 containerd[2044]: time="2024-12-13T01:56:31.461049901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:31.463007 containerd[2044]: time="2024-12-13T01:56:31.461217325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:31.484211 sshd[5016]: Accepted publickey for core from 139.178.68.195 port 53328 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:31.496021 sshd[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:31.498056 containerd[2044]: time="2024-12-13T01:56:31.496855921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tw7vs,Uid:8404c66f-b027-4878-814b-c22b0f9622a6,Namespace:kube-system,Attempt:1,} returns sandbox id \"12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432\"" Dec 13 01:56:31.530716 systemd-logind[2008]: New session 10 of user core. Dec 13 01:56:31.538392 containerd[2044]: time="2024-12-13T01:56:31.537479641Z" level=info msg="CreateContainer within sandbox \"12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:56:31.541194 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:56:31.560022 systemd[1]: Started cri-containerd-1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c.scope - libcontainer container 1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c. Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.245 [INFO][4979] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.246 [INFO][4979] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" iface="eth0" netns="/var/run/netns/cni-9060e8a6-bb74-9dfc-08b6-41f2a67687c2" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.246 [INFO][4979] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" iface="eth0" netns="/var/run/netns/cni-9060e8a6-bb74-9dfc-08b6-41f2a67687c2" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.249 [INFO][4979] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" iface="eth0" netns="/var/run/netns/cni-9060e8a6-bb74-9dfc-08b6-41f2a67687c2" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.249 [INFO][4979] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.250 [INFO][4979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.466 [INFO][5032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.467 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.467 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.517 [WARNING][5032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.517 [INFO][5032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.534 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:31.561466 containerd[2044]: 2024-12-13 01:56:31.554 [INFO][4979] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:31.567530 containerd[2044]: time="2024-12-13T01:56:31.567411757Z" level=info msg="TearDown network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\" successfully" Dec 13 01:56:31.569005 systemd[1]: run-netns-cni\x2d9060e8a6\x2dbb74\x2d9dfc\x2d08b6\x2d41f2a67687c2.mount: Deactivated successfully. 
Dec 13 01:56:31.570663 containerd[2044]: time="2024-12-13T01:56:31.569208649Z" level=info msg="StopPodSandbox for \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\" returns successfully" Dec 13 01:56:31.572893 containerd[2044]: time="2024-12-13T01:56:31.572809753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-59lz9,Uid:9859ed10-6294-4331-aad0-3ead71dc6b50,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:56:31.612911 containerd[2044]: time="2024-12-13T01:56:31.612688766Z" level=info msg="CreateContainer within sandbox \"12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"014d33d0f41e992c4f407592c64e13e8e8498b72ba2592b8b0f04a87fb594e9e\"" Dec 13 01:56:31.616330 containerd[2044]: time="2024-12-13T01:56:31.615931010Z" level=info msg="StartContainer for \"014d33d0f41e992c4f407592c64e13e8e8498b72ba2592b8b0f04a87fb594e9e\"" Dec 13 01:56:31.791380 systemd[1]: Started cri-containerd-014d33d0f41e992c4f407592c64e13e8e8498b72ba2592b8b0f04a87fb594e9e.scope - libcontainer container 014d33d0f41e992c4f407592c64e13e8e8498b72ba2592b8b0f04a87fb594e9e. Dec 13 01:56:31.815050 containerd[2044]: time="2024-12-13T01:56:31.814555119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-pchds,Uid:939dd521-1757-4ed9-83b7-813ae796a6af,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c\"" Dec 13 01:56:31.906691 containerd[2044]: time="2024-12-13T01:56:31.906139275Z" level=info msg="StartContainer for \"014d33d0f41e992c4f407592c64e13e8e8498b72ba2592b8b0f04a87fb594e9e\" returns successfully" Dec 13 01:56:31.982882 containerd[2044]: time="2024-12-13T01:56:31.982784115Z" level=info msg="StopPodSandbox for \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\"" Dec 13 01:56:31.984195 sshd[5016]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:31.988063 containerd[2044]: time="2024-12-13T01:56:31.986113551Z" level=info msg="StopPodSandbox for \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\"" Dec 13 01:56:31.995535 systemd[1]: sshd@9-172.31.19.88:22-139.178.68.195:53328.service: Deactivated successfully. Dec 13 01:56:32.005365 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:56:32.009234 systemd-logind[2008]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:56:32.045218 systemd[1]: Started sshd@10-172.31.19.88:22-139.178.68.195:53338.service - OpenSSH per-connection server daemon (139.178.68.195:53338). Dec 13 01:56:32.050359 systemd-logind[2008]: Removed session 10. Dec 13 01:56:32.226908 systemd-networkd[1926]: cali633377fa96c: Gained IPv6LL Dec 13 01:56:32.275143 systemd-networkd[1926]: cali7e037259348: Link UP Dec 13 01:56:32.275614 systemd-networkd[1926]: cali7e037259348: Gained carrier Dec 13 01:56:32.305754 sshd[5206]: Accepted publickey for core from 139.178.68.195 port 53338 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:32.315417 sshd[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:32.348507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1868483060.mount: Deactivated successfully. 
Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:31.875 [INFO][5108] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0 calico-apiserver-f77bf5bb4- calico-apiserver 9859ed10-6294-4331-aad0-3ead71dc6b50 906 0 2024-12-13 01:56:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f77bf5bb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-88 calico-apiserver-f77bf5bb4-59lz9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7e037259348 [] []}} ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:31.875 [INFO][5108] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.036 [INFO][5171] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" HandleID="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.140 [INFO][5171] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" HandleID="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038fca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-88", "pod":"calico-apiserver-f77bf5bb4-59lz9", "timestamp":"2024-12-13 01:56:32.036186984 +0000 UTC"}, Hostname:"ip-172-31-19-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.140 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.141 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.141 [INFO][5171] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-88' Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.144 [INFO][5171] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.154 [INFO][5171] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.177 [INFO][5171] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.184 [INFO][5171] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.191 [INFO][5171] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.191 [INFO][5171] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.197 [INFO][5171] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.213 [INFO][5171] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.244 [INFO][5171] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.4/26] block=192.168.81.0/26 handle="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.244 [INFO][5171] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.4/26] handle="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" host="ip-172-31-19-88" Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.244 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:56:32.349874 containerd[2044]: 2024-12-13 01:56:32.244 [INFO][5171] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.4/26] IPv6=[] ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" HandleID="k8s-pod-network.24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:32.352464 containerd[2044]: 2024-12-13 01:56:32.261 [INFO][5108] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9859ed10-6294-4331-aad0-3ead71dc6b50", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"", Pod:"calico-apiserver-f77bf5bb4-59lz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e037259348", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:32.352464 containerd[2044]: 2024-12-13 01:56:32.263 [INFO][5108] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.4/32] ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:32.352464 containerd[2044]: 2024-12-13 01:56:32.263 [INFO][5108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e037259348 ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:32.352464 containerd[2044]: 2024-12-13 01:56:32.275 [INFO][5108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:32.352464 containerd[2044]: 2024-12-13 01:56:32.283 [INFO][5108] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9859ed10-6294-4331-aad0-3ead71dc6b50", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac", Pod:"calico-apiserver-f77bf5bb4-59lz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e037259348", MAC:"ba:71:6f:d3:bc:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:32.352464 containerd[2044]: 2024-12-13 01:56:32.336 [INFO][5108] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac" Namespace="calico-apiserver" Pod="calico-apiserver-f77bf5bb4-59lz9" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:32.365044 systemd-logind[2008]: New session 11 of user core. Dec 13 01:56:32.375512 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:56:32.451873 kubelet[3281]: I1213 01:56:32.451628 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tw7vs" podStartSLOduration=37.451404638 podStartE2EDuration="37.451404638s" podCreationTimestamp="2024-12-13 01:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:32.45085355 +0000 UTC m=+49.720908716" watchObservedRunningTime="2024-12-13 01:56:32.451404638 +0000 UTC m=+49.721459804" Dec 13 01:56:32.483018 systemd-networkd[1926]: califf4471e1f32: Gained IPv6LL Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.252 [INFO][5211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.252 [INFO][5211] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" iface="eth0" netns="/var/run/netns/cni-3ad57c35-ad8a-3a0e-4350-00a6dce97d40" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.254 [INFO][5211] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" iface="eth0" netns="/var/run/netns/cni-3ad57c35-ad8a-3a0e-4350-00a6dce97d40" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.254 [INFO][5211] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" iface="eth0" netns="/var/run/netns/cni-3ad57c35-ad8a-3a0e-4350-00a6dce97d40" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.254 [INFO][5211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.254 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.444 [INFO][5226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.444 [INFO][5226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.444 [INFO][5226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.491 [WARNING][5226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.491 [INFO][5226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.500 [INFO][5226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:32.528099 containerd[2044]: 2024-12-13 01:56:32.516 [INFO][5211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:32.550988 systemd[1]: run-netns-cni\x2d3ad57c35\x2dad8a\x2d3a0e\x2d4350\x2d00a6dce97d40.mount: Deactivated successfully. Dec 13 01:56:32.564672 containerd[2044]: time="2024-12-13T01:56:32.562868246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:32.564672 containerd[2044]: time="2024-12-13T01:56:32.562966634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:32.564672 containerd[2044]: time="2024-12-13T01:56:32.562995098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:32.564672 containerd[2044]: time="2024-12-13T01:56:32.563185766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:32.619771 containerd[2044]: time="2024-12-13T01:56:32.617031963Z" level=info msg="TearDown network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\" successfully" Dec 13 01:56:32.619771 containerd[2044]: time="2024-12-13T01:56:32.617127975Z" level=info msg="StopPodSandbox for \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\" returns successfully" Dec 13 01:56:32.623105 containerd[2044]: time="2024-12-13T01:56:32.620585991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc9599d8-srdrd,Uid:1fd116f0-a6fd-4513-9f18-4fa2b846559e,Namespace:calico-system,Attempt:1,}" Dec 13 01:56:32.660241 systemd[1]: Started cri-containerd-24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac.scope - libcontainer container 24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac. Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.253 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.253 [INFO][5202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" iface="eth0" netns="/var/run/netns/cni-506cbb85-4f9c-c649-b386-74948aa629dd" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.255 [INFO][5202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" iface="eth0" netns="/var/run/netns/cni-506cbb85-4f9c-c649-b386-74948aa629dd" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.260 [INFO][5202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" iface="eth0" netns="/var/run/netns/cni-506cbb85-4f9c-c649-b386-74948aa629dd" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.261 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.261 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.512 [INFO][5230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.516 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.517 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.608 [WARNING][5230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.608 [INFO][5230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.616 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:32.680044 containerd[2044]: 2024-12-13 01:56:32.633 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:32.679830 systemd-networkd[1926]: calibc650ef2424: Gained IPv6LL Dec 13 01:56:32.697669 containerd[2044]: time="2024-12-13T01:56:32.694836267Z" level=info msg="TearDown network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\" successfully" Dec 13 01:56:32.697669 containerd[2044]: time="2024-12-13T01:56:32.694974015Z" level=info msg="StopPodSandbox for \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\" returns successfully" Dec 13 01:56:32.701981 systemd[1]: run-netns-cni\x2d506cbb85\x2d4f9c\x2dc649\x2db386\x2d74948aa629dd.mount: Deactivated successfully. Dec 13 01:56:32.709879 containerd[2044]: time="2024-12-13T01:56:32.705538359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxrmk,Uid:92de6f5a-467f-4f9f-aa0a-c0c83f71a31b,Namespace:kube-system,Attempt:1,}" Dec 13 01:56:32.988216 sshd[5206]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:32.999915 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:56:33.004333 systemd[1]: sshd@10-172.31.19.88:22-139.178.68.195:53338.service: Deactivated successfully. Dec 13 01:56:33.016762 systemd-logind[2008]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:56:33.050922 systemd[1]: Started sshd@11-172.31.19.88:22-139.178.68.195:53354.service - OpenSSH per-connection server daemon (139.178.68.195:53354). Dec 13 01:56:33.057337 systemd-logind[2008]: Removed session 11. Dec 13 01:56:33.178625 containerd[2044]: time="2024-12-13T01:56:33.178406581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f77bf5bb4-59lz9,Uid:9859ed10-6294-4331-aad0-3ead71dc6b50,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac\"" Dec 13 01:56:33.336148 sshd[5323]: Accepted publickey for core from 139.178.68.195 port 53354 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:33.351723 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:33.370411 systemd-logind[2008]: New session 12 of user core. Dec 13 01:56:33.378006 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 01:56:33.549335 systemd-networkd[1926]: calie6ff55a407b: Link UP Dec 13 01:56:33.552459 systemd-networkd[1926]: calie6ff55a407b: Gained carrier Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.147 [INFO][5289] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0 calico-kube-controllers-56cc9599d8- calico-system 1fd116f0-a6fd-4513-9f18-4fa2b846559e 917 0 2024-12-13 01:56:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56cc9599d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-88 calico-kube-controllers-56cc9599d8-srdrd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie6ff55a407b [] []}} ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.147 [INFO][5289] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.318 [INFO][5337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.381 [INFO][5337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a6840), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-88", "pod":"calico-kube-controllers-56cc9599d8-srdrd", "timestamp":"2024-12-13 01:56:33.318041342 +0000 UTC"}, Hostname:"ip-172-31-19-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.381 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.381 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.381 [INFO][5337] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-88' Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.392 [INFO][5337] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.406 [INFO][5337] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.422 [INFO][5337] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.430 [INFO][5337] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.442 [INFO][5337] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.442 [INFO][5337] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.448 [INFO][5337] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6 Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.465 [INFO][5337] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.492 [INFO][5337] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.5/26] block=192.168.81.0/26 handle="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.492 [INFO][5337] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.5/26] handle="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" host="ip-172-31-19-88" Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.492 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
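The allocation above walks the affinity block 192.168.81.0/26 and claims 192.168.81.5. The arithmetic can be mimicked with the standard library; this is only an illustrative sketch (not Calico's allocator), and the set of already-claimed addresses below .5 is an assumption based on the earlier endpoints on this host:

    import ipaddress

    block = ipaddress.ip_network("192.168.81.0/26")          # the affinity block from the log
    claimed = {ipaddress.ip_address(f"192.168.81.{i}") for i in range(1, 5)}  # assumed already in use

    # First host address in the block that is not yet claimed.
    next_free = next(ip for ip in block.hosts() if ip not in claimed)
    print(next_free)   # 192.168.81.5, matching the address claimed for this sandbox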
Dec 13 01:56:33.635839 containerd[2044]: 2024-12-13 01:56:33.492 [INFO][5337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.5/26] IPv6=[] ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:33.637065 containerd[2044]: 2024-12-13 01:56:33.524 [INFO][5289] cni-plugin/k8s.go 386: Populated endpoint ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0", GenerateName:"calico-kube-controllers-56cc9599d8-", Namespace:"calico-system", SelfLink:"", UID:"1fd116f0-a6fd-4513-9f18-4fa2b846559e", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc9599d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"", Pod:"calico-kube-controllers-56cc9599d8-srdrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6ff55a407b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:33.637065 containerd[2044]: 2024-12-13 01:56:33.524 [INFO][5289] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.5/32] ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:33.637065 containerd[2044]: 2024-12-13 01:56:33.524 [INFO][5289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6ff55a407b ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:33.637065 containerd[2044]: 2024-12-13 01:56:33.556 [INFO][5289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:33.637065 containerd[2044]: 2024-12-13 01:56:33.557 [INFO][5289] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0", GenerateName:"calico-kube-controllers-56cc9599d8-", Namespace:"calico-system", SelfLink:"", UID:"1fd116f0-a6fd-4513-9f18-4fa2b846559e", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc9599d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6", Pod:"calico-kube-controllers-56cc9599d8-srdrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6ff55a407b", MAC:"fe:84:4a:8a:72:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:33.637065 containerd[2044]: 2024-12-13 01:56:33.601 [INFO][5289] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Namespace="calico-system" Pod="calico-kube-controllers-56cc9599d8-srdrd" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:33.771494 containerd[2044]: time="2024-12-13T01:56:33.767686384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:33.771494 containerd[2044]: time="2024-12-13T01:56:33.767831464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:33.771494 containerd[2044]: time="2024-12-13T01:56:33.767869180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:33.771494 containerd[2044]: time="2024-12-13T01:56:33.768111652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:33.866277 sshd[5323]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:33.876011 systemd[1]: run-containerd-runc-k8s.io-880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6-runc.UcaZZj.mount: Deactivated successfully. 
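With the MAC fe:84:4a:8a:72:ca now attached to interface calie6ff55a407b, both fields are visible in the WorkloadEndpoint dump above. A throwaway sketch for extracting them from the literal log text, purely illustrative:

    import re

    dump = 'InterfaceName:"calie6ff55a407b", MAC:"fe:84:4a:8a:72:ca"'

    iface = re.search(r'InterfaceName:"([^"]+)"', dump).group(1)
    mac = re.search(r'MAC:"([0-9a-f:]{17})"', dump).group(1)
    print(iface, mac)   # calie6ff55a407b fe:84:4a:8a:72:ca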
Dec 13 01:56:33.892546 systemd-networkd[1926]: cali6825cbc9625: Link UP Dec 13 01:56:33.903041 systemd[1]: Started cri-containerd-880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6.scope - libcontainer container 880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6. Dec 13 01:56:33.918124 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:56:33.925567 systemd-networkd[1926]: cali6825cbc9625: Gained carrier Dec 13 01:56:33.926457 systemd[1]: sshd@11-172.31.19.88:22-139.178.68.195:53354.service: Deactivated successfully. Dec 13 01:56:33.944920 systemd-logind[2008]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:56:33.958177 systemd-logind[2008]: Removed session 12. Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.174 [INFO][5293] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0 coredns-7db6d8ff4d- kube-system 92de6f5a-467f-4f9f-aa0a-c0c83f71a31b 918 0 2024-12-13 01:55:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-88 coredns-7db6d8ff4d-nxrmk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6825cbc9625 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.175 [INFO][5293] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.398 [INFO][5341] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" HandleID="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.444 [INFO][5341] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" HandleID="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317c90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-88", "pod":"coredns-7db6d8ff4d-nxrmk", "timestamp":"2024-12-13 01:56:33.39845693 +0000 UTC"}, Hostname:"ip-172-31-19-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.444 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.494 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.495 [INFO][5341] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-88' Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.541 [INFO][5341] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.597 [INFO][5341] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.657 [INFO][5341] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.677 [INFO][5341] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.697 [INFO][5341] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.697 [INFO][5341] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.709 [INFO][5341] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434 Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.733 [INFO][5341] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.767 [INFO][5341] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.6/26] block=192.168.81.0/26 handle="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.776 [INFO][5341] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.6/26] handle="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" host="ip-172-31-19-88" Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.776 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
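Both allocations are serialized under the host-wide IPAM lock, so .5 goes to the kube-controllers sandbox and .6 to the coredns sandbox. A small parsing sketch that collects the claimed IP per sandbox handle from the "Auto-assigned" messages (handles shortened to 12 characters only for readability):

    import re

    lines = [
        'Auto-assigned 1 out of 1 IPv4s: [192.168.81.5/26] handle="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6"',
        'Auto-assigned 1 out of 1 IPv4s: [192.168.81.6/26] handle="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434"',
    ]

    assignments = {}
    for entry in lines:
        m = re.search(r'Auto-assigned .*?: \[([0-9.]+)/\d+\] handle="k8s-pod-network\.([0-9a-f]{12})', entry)
        if m:
            assignments[m.group(2)] = m.group(1)   # short sandbox handle -> claimed IP
    print(assignments)   # {'880fb84ca6e4': '192.168.81.5', 'dad8a9993c4e': '192.168.81.6'}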
Dec 13 01:56:34.006793 containerd[2044]: 2024-12-13 01:56:33.776 [INFO][5341] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.6/26] IPv6=[] ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" HandleID="k8s-pod-network.dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:34.010094 containerd[2044]: 2024-12-13 01:56:33.822 [INFO][5293] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"", Pod:"coredns-7db6d8ff4d-nxrmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6825cbc9625", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:34.010094 containerd[2044]: 2024-12-13 01:56:33.822 [INFO][5293] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.6/32] ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:34.010094 containerd[2044]: 2024-12-13 01:56:33.822 [INFO][5293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6825cbc9625 ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:34.010094 containerd[2044]: 2024-12-13 01:56:33.957 [INFO][5293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 
13 01:56:34.010094 containerd[2044]: 2024-12-13 01:56:33.957 [INFO][5293] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434", Pod:"coredns-7db6d8ff4d-nxrmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6825cbc9625", MAC:"d2:d6:a1:f2:ef:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:34.010094 containerd[2044]: 2024-12-13 01:56:33.993 [INFO][5293] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nxrmk" WorkloadEndpoint="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:34.010094 containerd[2044]: time="2024-12-13T01:56:34.009690169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:34.014043 containerd[2044]: time="2024-12-13T01:56:34.013786753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:56:34.018249 containerd[2044]: time="2024-12-13T01:56:34.017826674Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:34.027739 containerd[2044]: time="2024-12-13T01:56:34.027603122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:34.037978 containerd[2044]: 
time="2024-12-13T01:56:34.037807046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.640496958s" Dec 13 01:56:34.039401 containerd[2044]: time="2024-12-13T01:56:34.039334634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:56:34.045765 containerd[2044]: time="2024-12-13T01:56:34.045699758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:56:34.052196 containerd[2044]: time="2024-12-13T01:56:34.051766598Z" level=info msg="CreateContainer within sandbox \"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:56:34.114664 containerd[2044]: time="2024-12-13T01:56:34.113956430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:34.114664 containerd[2044]: time="2024-12-13T01:56:34.114044030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:34.114664 containerd[2044]: time="2024-12-13T01:56:34.114069782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:34.114664 containerd[2044]: time="2024-12-13T01:56:34.114206270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:34.116093 containerd[2044]: time="2024-12-13T01:56:34.115993154Z" level=info msg="CreateContainer within sandbox \"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5bb19e1b566531faaa45eaee40b0d16fa149e8f4b32dd40997f5b97f0a3e75bc\"" Dec 13 01:56:34.119849 containerd[2044]: time="2024-12-13T01:56:34.119787110Z" level=info msg="StartContainer for \"5bb19e1b566531faaa45eaee40b0d16fa149e8f4b32dd40997f5b97f0a3e75bc\"" Dec 13 01:56:34.128033 containerd[2044]: time="2024-12-13T01:56:34.127872842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc9599d8-srdrd,Uid:1fd116f0-a6fd-4513-9f18-4fa2b846559e,Namespace:calico-system,Attempt:1,} returns sandbox id \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\"" Dec 13 01:56:34.170971 systemd[1]: Started cri-containerd-dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434.scope - libcontainer container dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434. Dec 13 01:56:34.193039 systemd[1]: Started cri-containerd-5bb19e1b566531faaa45eaee40b0d16fa149e8f4b32dd40997f5b97f0a3e75bc.scope - libcontainer container 5bb19e1b566531faaa45eaee40b0d16fa149e8f4b32dd40997f5b97f0a3e75bc. 
Dec 13 01:56:34.210898 systemd-networkd[1926]: cali7e037259348: Gained IPv6LL Dec 13 01:56:34.276322 containerd[2044]: time="2024-12-13T01:56:34.276143475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxrmk,Uid:92de6f5a-467f-4f9f-aa0a-c0c83f71a31b,Namespace:kube-system,Attempt:1,} returns sandbox id \"dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434\"" Dec 13 01:56:34.285520 containerd[2044]: time="2024-12-13T01:56:34.285208503Z" level=info msg="CreateContainer within sandbox \"dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:56:34.301433 containerd[2044]: time="2024-12-13T01:56:34.301351215Z" level=info msg="StartContainer for \"5bb19e1b566531faaa45eaee40b0d16fa149e8f4b32dd40997f5b97f0a3e75bc\" returns successfully" Dec 13 01:56:34.323563 containerd[2044]: time="2024-12-13T01:56:34.323198355Z" level=info msg="CreateContainer within sandbox \"dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23a193d215fe751cf80d81c708f24ed1c5a90d95d712c5b396251dae8d69784d\"" Dec 13 01:56:34.327468 containerd[2044]: time="2024-12-13T01:56:34.327379167Z" level=info msg="StartContainer for \"23a193d215fe751cf80d81c708f24ed1c5a90d95d712c5b396251dae8d69784d\"" Dec 13 01:56:34.399498 systemd[1]: Started cri-containerd-23a193d215fe751cf80d81c708f24ed1c5a90d95d712c5b396251dae8d69784d.scope - libcontainer container 23a193d215fe751cf80d81c708f24ed1c5a90d95d712c5b396251dae8d69784d. Dec 13 01:56:34.531133 containerd[2044]: time="2024-12-13T01:56:34.531075112Z" level=info msg="StartContainer for \"23a193d215fe751cf80d81c708f24ed1c5a90d95d712c5b396251dae8d69784d\" returns successfully" Dec 13 01:56:35.497990 kubelet[3281]: I1213 01:56:35.497695 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nxrmk" podStartSLOduration=40.497628353 podStartE2EDuration="40.497628353s" podCreationTimestamp="2024-12-13 01:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:35.495002801 +0000 UTC m=+52.765057991" watchObservedRunningTime="2024-12-13 01:56:35.497628353 +0000 UTC m=+52.767683531" Dec 13 01:56:35.555911 systemd-networkd[1926]: calie6ff55a407b: Gained IPv6LL Dec 13 01:56:35.556459 systemd-networkd[1926]: cali6825cbc9625: Gained IPv6LL Dec 13 01:56:36.483565 containerd[2044]: time="2024-12-13T01:56:36.483471714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:36.488047 containerd[2044]: time="2024-12-13T01:56:36.487951110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:56:36.495327 containerd[2044]: time="2024-12-13T01:56:36.495075390Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:36.510690 containerd[2044]: time="2024-12-13T01:56:36.509051454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:36.513845 containerd[2044]: 
time="2024-12-13T01:56:36.513759246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.467406124s" Dec 13 01:56:36.514428 containerd[2044]: time="2024-12-13T01:56:36.514192002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:56:36.521241 containerd[2044]: time="2024-12-13T01:56:36.521138742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:56:36.538285 containerd[2044]: time="2024-12-13T01:56:36.538190982Z" level=info msg="CreateContainer within sandbox \"1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:56:36.591238 containerd[2044]: time="2024-12-13T01:56:36.590051454Z" level=info msg="CreateContainer within sandbox \"1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe7e1e87c5b0a73e3baf7aef68c7c7e335b8955cd1ef84cae27615b7d1d42dc5\"" Dec 13 01:56:36.592594 containerd[2044]: time="2024-12-13T01:56:36.591607734Z" level=info msg="StartContainer for \"fe7e1e87c5b0a73e3baf7aef68c7c7e335b8955cd1ef84cae27615b7d1d42dc5\"" Dec 13 01:56:36.696341 systemd[1]: Started cri-containerd-fe7e1e87c5b0a73e3baf7aef68c7c7e335b8955cd1ef84cae27615b7d1d42dc5.scope - libcontainer container fe7e1e87c5b0a73e3baf7aef68c7c7e335b8955cd1ef84cae27615b7d1d42dc5. 
Dec 13 01:56:36.859208 containerd[2044]: time="2024-12-13T01:56:36.859130564Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:36.867624 containerd[2044]: time="2024-12-13T01:56:36.867548504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:56:36.877825 containerd[2044]: time="2024-12-13T01:56:36.877010972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 355.78253ms" Dec 13 01:56:36.877825 containerd[2044]: time="2024-12-13T01:56:36.877090616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:56:36.884210 containerd[2044]: time="2024-12-13T01:56:36.884113976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:56:36.888242 containerd[2044]: time="2024-12-13T01:56:36.886163828Z" level=info msg="CreateContainer within sandbox \"24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:56:36.908743 containerd[2044]: time="2024-12-13T01:56:36.907929608Z" level=info msg="StartContainer for \"fe7e1e87c5b0a73e3baf7aef68c7c7e335b8955cd1ef84cae27615b7d1d42dc5\" returns successfully" Dec 13 01:56:36.940186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101996656.mount: Deactivated successfully. Dec 13 01:56:36.947743 containerd[2044]: time="2024-12-13T01:56:36.946066412Z" level=info msg="CreateContainer within sandbox \"24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0734913f5d0cf9a6a4cfb58a4ca87a4c7cb3a16976c77bb7c028abbbb2e36ef3\"" Dec 13 01:56:36.951376 containerd[2044]: time="2024-12-13T01:56:36.951281684Z" level=info msg="StartContainer for \"0734913f5d0cf9a6a4cfb58a4ca87a4c7cb3a16976c77bb7c028abbbb2e36ef3\"" Dec 13 01:56:37.055118 systemd[1]: Started cri-containerd-0734913f5d0cf9a6a4cfb58a4ca87a4c7cb3a16976c77bb7c028abbbb2e36ef3.scope - libcontainer container 0734913f5d0cf9a6a4cfb58a4ca87a4c7cb3a16976c77bb7c028abbbb2e36ef3. 
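The apiserver image is pulled twice: the first PullImage reports 2.467406124s, while the re-pull above returns after 355.78253ms with only 77 bytes read, which suggests the layers were already present locally. A tiny comparison sketch over the two reported durations (illustrative only, not containerd tooling):

    import re

    def go_duration_seconds(text: str) -> float:
        # Parse the trailing "in <value><unit>" from a Pulled message (seconds or milliseconds).
        value, unit = re.search(r'in ([0-9.]+)(ms|s)', text).groups()
        return float(value) / (1000.0 if unit == "ms" else 1.0)

    first = go_duration_seconds("in 2.467406124s")    # first pull of the apiserver image
    second = go_duration_seconds("in 355.78253ms")    # re-pull after only 77 bytes were read
    print(f"{first:.3f}s vs {second:.3f}s  (~{first / second:.0f}x faster)")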
Dec 13 01:56:37.233594 containerd[2044]: time="2024-12-13T01:56:37.233398649Z" level=info msg="StartContainer for \"0734913f5d0cf9a6a4cfb58a4ca87a4c7cb3a16976c77bb7c028abbbb2e36ef3\" returns successfully" Dec 13 01:56:37.517368 kubelet[3281]: I1213 01:56:37.517230 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f77bf5bb4-59lz9" podStartSLOduration=27.824517544 podStartE2EDuration="31.517198987s" podCreationTimestamp="2024-12-13 01:56:06 +0000 UTC" firstStartedPulling="2024-12-13 01:56:33.188999221 +0000 UTC m=+50.459054375" lastFinishedPulling="2024-12-13 01:56:36.881680604 +0000 UTC m=+54.151735818" observedRunningTime="2024-12-13 01:56:37.517031899 +0000 UTC m=+54.787087149" watchObservedRunningTime="2024-12-13 01:56:37.517198987 +0000 UTC m=+54.787254177" Dec 13 01:56:37.589413 kubelet[3281]: I1213 01:56:37.588884 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f77bf5bb4-pchds" podStartSLOduration=26.895038196 podStartE2EDuration="31.588859627s" podCreationTimestamp="2024-12-13 01:56:06 +0000 UTC" firstStartedPulling="2024-12-13 01:56:31.825789963 +0000 UTC m=+49.095845129" lastFinishedPulling="2024-12-13 01:56:36.51961113 +0000 UTC m=+53.789666560" observedRunningTime="2024-12-13 01:56:37.581917003 +0000 UTC m=+54.851972169" watchObservedRunningTime="2024-12-13 01:56:37.588859627 +0000 UTC m=+54.858914793" Dec 13 01:56:37.835795 ntpd[2000]: Listen normally on 8 vxlan.calico 192.168.81.0:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 8 vxlan.calico 192.168.81.0:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 9 vxlan.calico [fe80::6441:96ff:fefa:df5a%4]:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 10 cali633377fa96c [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 11 califf4471e1f32 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 12 calibc650ef2424 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 13 cali7e037259348 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 14 calie6ff55a407b [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:56:37.837332 ntpd[2000]: 13 Dec 01:56:37 ntpd[2000]: Listen normally on 15 cali6825cbc9625 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:56:37.835923 ntpd[2000]: Listen normally on 9 vxlan.calico [fe80::6441:96ff:fefa:df5a%4]:123 Dec 13 01:56:37.836005 ntpd[2000]: Listen normally on 10 cali633377fa96c [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:56:37.836081 ntpd[2000]: Listen normally on 11 califf4471e1f32 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:56:37.836207 ntpd[2000]: Listen normally on 12 calibc650ef2424 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:56:37.836295 ntpd[2000]: Listen normally on 13 cali7e037259348 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:56:37.836393 ntpd[2000]: Listen normally on 14 calie6ff55a407b [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:56:37.836465 ntpd[2000]: Listen normally on 15 cali6825cbc9625 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:56:38.498770 kubelet[3281]: I1213 01:56:38.498723 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:56:38.500022 kubelet[3281]: I1213 01:56:38.498723 
3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:56:38.911383 systemd[1]: Started sshd@12-172.31.19.88:22-139.178.68.195:42508.service - OpenSSH per-connection server daemon (139.178.68.195:42508). Dec 13 01:56:39.160400 sshd[5652]: Accepted publickey for core from 139.178.68.195 port 42508 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:39.167905 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:39.184615 systemd-logind[2008]: New session 13 of user core. Dec 13 01:56:39.191957 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:56:39.718730 sshd[5652]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:39.729414 systemd[1]: sshd@12-172.31.19.88:22-139.178.68.195:42508.service: Deactivated successfully. Dec 13 01:56:39.739336 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:56:39.748585 systemd-logind[2008]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:56:39.753544 systemd-logind[2008]: Removed session 13. Dec 13 01:56:40.066941 containerd[2044]: time="2024-12-13T01:56:40.066699044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:40.071071 containerd[2044]: time="2024-12-13T01:56:40.070055972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:56:40.071947 containerd[2044]: time="2024-12-13T01:56:40.071586920Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:40.083955 containerd[2044]: time="2024-12-13T01:56:40.083884484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:40.087769 containerd[2044]: time="2024-12-13T01:56:40.087693032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.20349472s" Dec 13 01:56:40.088231 containerd[2044]: time="2024-12-13T01:56:40.087990800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:56:40.091863 containerd[2044]: time="2024-12-13T01:56:40.091711412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:56:40.136307 containerd[2044]: time="2024-12-13T01:56:40.136026824Z" level=info msg="CreateContainer within sandbox \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:56:40.172156 containerd[2044]: time="2024-12-13T01:56:40.172074584Z" level=info msg="CreateContainer within sandbox \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\"" Dec 13 01:56:40.173918 containerd[2044]: time="2024-12-13T01:56:40.173281940Z" level=info msg="StartContainer for \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\"" Dec 13 01:56:40.282336 systemd[1]: Started cri-containerd-b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9.scope - libcontainer container b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9. Dec 13 01:56:40.501960 containerd[2044]: time="2024-12-13T01:56:40.501897286Z" level=info msg="StartContainer for \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\" returns successfully" Dec 13 01:56:41.637330 systemd[1]: run-containerd-runc-k8s.io-b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9-runc.MjtVNX.mount: Deactivated successfully. Dec 13 01:56:41.790996 containerd[2044]: time="2024-12-13T01:56:41.790916148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:41.794100 containerd[2044]: time="2024-12-13T01:56:41.794034372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:56:41.800621 containerd[2044]: time="2024-12-13T01:56:41.798746892Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:41.806388 containerd[2044]: time="2024-12-13T01:56:41.804717180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:41.809284 containerd[2044]: time="2024-12-13T01:56:41.809222316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.717445372s" Dec 13 01:56:41.809504 containerd[2044]: time="2024-12-13T01:56:41.809471772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:56:41.815147 containerd[2044]: time="2024-12-13T01:56:41.815086200Z" level=info msg="CreateContainer within sandbox \"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:56:41.850970 containerd[2044]: time="2024-12-13T01:56:41.850892028Z" level=info msg="CreateContainer within sandbox \"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b2318c92348a2be6a59aee6ae37a0a11b4fe2a6bf6b40b12bb978969d11c5d6\"" Dec 13 01:56:41.852708 containerd[2044]: time="2024-12-13T01:56:41.852601308Z" level=info msg="StartContainer for \"9b2318c92348a2be6a59aee6ae37a0a11b4fe2a6bf6b40b12bb978969d11c5d6\"" Dec 13 01:56:41.906976 kubelet[3281]: I1213 01:56:41.905582 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-56cc9599d8-srdrd" podStartSLOduration=30.946134415 podStartE2EDuration="36.905559349s" podCreationTimestamp="2024-12-13 01:56:05 +0000 UTC" firstStartedPulling="2024-12-13 01:56:34.131332238 +0000 UTC m=+51.401387392" lastFinishedPulling="2024-12-13 01:56:40.090757076 +0000 UTC m=+57.360812326" observedRunningTime="2024-12-13 01:56:40.550999582 +0000 UTC m=+57.821054760" watchObservedRunningTime="2024-12-13 01:56:41.905559349 +0000 UTC m=+59.175614527" Dec 13 01:56:41.939977 systemd[1]: Started cri-containerd-9b2318c92348a2be6a59aee6ae37a0a11b4fe2a6bf6b40b12bb978969d11c5d6.scope - libcontainer container 9b2318c92348a2be6a59aee6ae37a0a11b4fe2a6bf6b40b12bb978969d11c5d6. Dec 13 01:56:42.078176 containerd[2044]: time="2024-12-13T01:56:42.078065998Z" level=info msg="StartContainer for \"9b2318c92348a2be6a59aee6ae37a0a11b4fe2a6bf6b40b12bb978969d11c5d6\" returns successfully" Dec 13 01:56:42.199811 kubelet[3281]: I1213 01:56:42.199124 3281 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:56:42.199811 kubelet[3281]: I1213 01:56:42.199183 3281 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:56:42.562238 kubelet[3281]: I1213 01:56:42.561002 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v8ld8" podStartSLOduration=27.144781704 podStartE2EDuration="37.5609808s" podCreationTimestamp="2024-12-13 01:56:05 +0000 UTC" firstStartedPulling="2024-12-13 01:56:31.394748448 +0000 UTC m=+48.664803602" lastFinishedPulling="2024-12-13 01:56:41.810947544 +0000 UTC m=+59.081002698" observedRunningTime="2024-12-13 01:56:42.56083146 +0000 UTC m=+59.830886662" watchObservedRunningTime="2024-12-13 01:56:42.5609808 +0000 UTC m=+59.831035954" Dec 13 01:56:43.004381 containerd[2044]: time="2024-12-13T01:56:43.003780202Z" level=info msg="StopPodSandbox for \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\"" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.101 [WARNING][5809] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807", Pod:"csi-node-driver-v8ld8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali633377fa96c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.101 [INFO][5809] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.101 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" iface="eth0" netns="" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.101 [INFO][5809] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.101 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.165 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.166 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.166 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.184 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.184 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.189 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:43.196726 containerd[2044]: 2024-12-13 01:56:43.192 [INFO][5809] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.198462 containerd[2044]: time="2024-12-13T01:56:43.196811807Z" level=info msg="TearDown network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\" successfully" Dec 13 01:56:43.198462 containerd[2044]: time="2024-12-13T01:56:43.196847711Z" level=info msg="StopPodSandbox for \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\" returns successfully" Dec 13 01:56:43.199325 containerd[2044]: time="2024-12-13T01:56:43.199263455Z" level=info msg="RemovePodSandbox for \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\"" Dec 13 01:56:43.199459 containerd[2044]: time="2024-12-13T01:56:43.199328483Z" level=info msg="Forcibly stopping sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\"" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.270 [WARNING][5835] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fdcc0f22-5979-4fd2-8ab6-4d3cbd4e07e6", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"641213fd69e284f39da2a154504550f377af6e91a7724823a43d1dd0d3876807", Pod:"csi-node-driver-v8ld8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali633377fa96c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.271 [INFO][5835] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.271 [INFO][5835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" iface="eth0" netns="" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.271 [INFO][5835] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.271 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.305 [INFO][5842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.305 [INFO][5842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.305 [INFO][5842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.319 [WARNING][5842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.319 [INFO][5842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" HandleID="k8s-pod-network.6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Workload="ip--172--31--19--88-k8s-csi--node--driver--v8ld8-eth0" Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.322 [INFO][5842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:43.327172 containerd[2044]: 2024-12-13 01:56:43.324 [INFO][5835] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077" Dec 13 01:56:43.328122 containerd[2044]: time="2024-12-13T01:56:43.327190344Z" level=info msg="TearDown network for sandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\" successfully" Dec 13 01:56:43.338522 containerd[2044]: time="2024-12-13T01:56:43.338442204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:43.338711 containerd[2044]: time="2024-12-13T01:56:43.338566848Z" level=info msg="RemovePodSandbox \"6343f62a489e6a0939a4e36207b4c1845b6b06427a4163c5c01bd4847534b077\" returns successfully" Dec 13 01:56:43.339693 containerd[2044]: time="2024-12-13T01:56:43.339592056Z" level=info msg="StopPodSandbox for \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\"" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.413 [WARNING][5861] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9859ed10-6294-4331-aad0-3ead71dc6b50", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac", Pod:"calico-apiserver-f77bf5bb4-59lz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e037259348", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.414 [INFO][5861] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.414 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" iface="eth0" netns="" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.414 [INFO][5861] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.414 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.460 [INFO][5868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.461 [INFO][5868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.461 [INFO][5868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.473 [WARNING][5868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.474 [INFO][5868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.476 [INFO][5868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:43.481938 containerd[2044]: 2024-12-13 01:56:43.478 [INFO][5861] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.484104 containerd[2044]: time="2024-12-13T01:56:43.481983817Z" level=info msg="TearDown network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\" successfully" Dec 13 01:56:43.484104 containerd[2044]: time="2024-12-13T01:56:43.482020873Z" level=info msg="StopPodSandbox for \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\" returns successfully" Dec 13 01:56:43.484104 containerd[2044]: time="2024-12-13T01:56:43.483294745Z" level=info msg="RemovePodSandbox for \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\"" Dec 13 01:56:43.484104 containerd[2044]: time="2024-12-13T01:56:43.483344461Z" level=info msg="Forcibly stopping sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\"" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.554 [WARNING][5886] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"9859ed10-6294-4331-aad0-3ead71dc6b50", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"24f22e06d706ebda34facee93c034c96fdcadc38343a42dbf226a554e7dca6ac", Pod:"calico-apiserver-f77bf5bb4-59lz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e037259348", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.555 [INFO][5886] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.555 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" iface="eth0" netns="" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.555 [INFO][5886] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.555 [INFO][5886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.597 [INFO][5893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.598 [INFO][5893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.598 [INFO][5893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.610 [WARNING][5893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.610 [INFO][5893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" HandleID="k8s-pod-network.9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--59lz9-eth0" Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.615 [INFO][5893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:43.623022 containerd[2044]: 2024-12-13 01:56:43.618 [INFO][5886] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348" Dec 13 01:56:43.623022 containerd[2044]: time="2024-12-13T01:56:43.622971697Z" level=info msg="TearDown network for sandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\" successfully" Dec 13 01:56:43.638167 containerd[2044]: time="2024-12-13T01:56:43.637875601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:43.639388 containerd[2044]: time="2024-12-13T01:56:43.639337141Z" level=info msg="RemovePodSandbox \"9974a3da7b8c2da0a2a31d158eadbbb3c223490d530df5cf71f5d9d6e5d77348\" returns successfully" Dec 13 01:56:43.643150 containerd[2044]: time="2024-12-13T01:56:43.642807565Z" level=info msg="StopPodSandbox for \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\"" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.745 [WARNING][5913] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0", GenerateName:"calico-kube-controllers-56cc9599d8-", Namespace:"calico-system", SelfLink:"", UID:"1fd116f0-a6fd-4513-9f18-4fa2b846559e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc9599d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6", Pod:"calico-kube-controllers-56cc9599d8-srdrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6ff55a407b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.745 [INFO][5913] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.745 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" iface="eth0" netns="" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.745 [INFO][5913] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.745 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.781 [INFO][5921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.782 [INFO][5921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.782 [INFO][5921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.797 [WARNING][5921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.797 [INFO][5921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.801 [INFO][5921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:43.806717 containerd[2044]: 2024-12-13 01:56:43.803 [INFO][5913] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.807700 containerd[2044]: time="2024-12-13T01:56:43.806947118Z" level=info msg="TearDown network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\" successfully" Dec 13 01:56:43.807700 containerd[2044]: time="2024-12-13T01:56:43.806994062Z" level=info msg="StopPodSandbox for \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\" returns successfully" Dec 13 01:56:43.808215 containerd[2044]: time="2024-12-13T01:56:43.808167866Z" level=info msg="RemovePodSandbox for \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\"" Dec 13 01:56:43.808297 containerd[2044]: time="2024-12-13T01:56:43.808229726Z" level=info msg="Forcibly stopping sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\"" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.882 [WARNING][5939] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0", GenerateName:"calico-kube-controllers-56cc9599d8-", Namespace:"calico-system", SelfLink:"", UID:"1fd116f0-a6fd-4513-9f18-4fa2b846559e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc9599d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6", Pod:"calico-kube-controllers-56cc9599d8-srdrd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6ff55a407b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.883 [INFO][5939] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.883 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" iface="eth0" netns="" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.883 [INFO][5939] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.883 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.922 [INFO][5945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.923 [INFO][5945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.923 [INFO][5945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.934 [WARNING][5945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.934 [INFO][5945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" HandleID="k8s-pod-network.348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.937 [INFO][5945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:43.942110 containerd[2044]: 2024-12-13 01:56:43.939 [INFO][5939] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2" Dec 13 01:56:43.942110 containerd[2044]: time="2024-12-13T01:56:43.942066555Z" level=info msg="TearDown network for sandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\" successfully" Dec 13 01:56:43.950962 containerd[2044]: time="2024-12-13T01:56:43.950831115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:43.951121 containerd[2044]: time="2024-12-13T01:56:43.950988387Z" level=info msg="RemovePodSandbox \"348ee4163ca919b6ea0414623c8e18a78def147ae4152a3dfe9a6b62d6c85ce2\" returns successfully" Dec 13 01:56:43.951830 containerd[2044]: time="2024-12-13T01:56:43.951689907Z" level=info msg="StopPodSandbox for \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\"" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.021 [WARNING][5963] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8404c66f-b027-4878-814b-c22b0f9622a6", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432", Pod:"coredns-7db6d8ff4d-tw7vs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf4471e1f32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.022 [INFO][5963] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.022 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" iface="eth0" netns="" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.022 [INFO][5963] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.022 [INFO][5963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.061 [INFO][5969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.062 [INFO][5969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.062 [INFO][5969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.077 [WARNING][5969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.077 [INFO][5969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.080 [INFO][5969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:44.085928 containerd[2044]: 2024-12-13 01:56:44.083 [INFO][5963] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.087168 containerd[2044]: time="2024-12-13T01:56:44.086010732Z" level=info msg="TearDown network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\" successfully" Dec 13 01:56:44.087168 containerd[2044]: time="2024-12-13T01:56:44.086052828Z" level=info msg="StopPodSandbox for \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\" returns successfully" Dec 13 01:56:44.087559 containerd[2044]: time="2024-12-13T01:56:44.087486348Z" level=info msg="RemovePodSandbox for \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\"" Dec 13 01:56:44.087700 containerd[2044]: time="2024-12-13T01:56:44.087586308Z" level=info msg="Forcibly stopping sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\"" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.158 [WARNING][5987] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8404c66f-b027-4878-814b-c22b0f9622a6", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"12f00f000701afe9786ae9702c83267cddb07b29062ffffd28a2da1f4bf88432", Pod:"coredns-7db6d8ff4d-tw7vs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf4471e1f32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.158 [INFO][5987] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.158 [INFO][5987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" iface="eth0" netns="" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.158 [INFO][5987] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.158 [INFO][5987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.201 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.201 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.201 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.213 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.213 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" HandleID="k8s-pod-network.f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--tw7vs-eth0" Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.217 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:44.222058 containerd[2044]: 2024-12-13 01:56:44.219 [INFO][5987] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0" Dec 13 01:56:44.222058 containerd[2044]: time="2024-12-13T01:56:44.222031116Z" level=info msg="TearDown network for sandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\" successfully" Dec 13 01:56:44.230245 containerd[2044]: time="2024-12-13T01:56:44.230142816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:44.230415 containerd[2044]: time="2024-12-13T01:56:44.230315472Z" level=info msg="RemovePodSandbox \"f91e81b3e8238e30b5ef4a963d81c9024105c7ff757c8a2452b82c53d2342fe0\" returns successfully" Dec 13 01:56:44.231514 containerd[2044]: time="2024-12-13T01:56:44.231450300Z" level=info msg="StopPodSandbox for \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\"" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.300 [WARNING][6013] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"939dd521-1757-4ed9-83b7-813ae796a6af", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c", Pod:"calico-apiserver-f77bf5bb4-pchds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc650ef2424", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.301 [INFO][6013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.301 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" iface="eth0" netns="" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.301 [INFO][6013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.301 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.339 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.339 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.339 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.353 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.353 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.356 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:44.361237 containerd[2044]: 2024-12-13 01:56:44.359 [INFO][6013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.361237 containerd[2044]: time="2024-12-13T01:56:44.361196089Z" level=info msg="TearDown network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\" successfully" Dec 13 01:56:44.361237 containerd[2044]: time="2024-12-13T01:56:44.361233445Z" level=info msg="StopPodSandbox for \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\" returns successfully" Dec 13 01:56:44.363781 containerd[2044]: time="2024-12-13T01:56:44.362949145Z" level=info msg="RemovePodSandbox for \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\"" Dec 13 01:56:44.363781 containerd[2044]: time="2024-12-13T01:56:44.363031261Z" level=info msg="Forcibly stopping sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\"" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.436 [WARNING][6037] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0", GenerateName:"calico-apiserver-f77bf5bb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"939dd521-1757-4ed9-83b7-813ae796a6af", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f77bf5bb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"1af8f7b5742a723cb3f9991281dd30421dc4f346f77cc9d48d7ada6eb6eb940c", Pod:"calico-apiserver-f77bf5bb4-pchds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc650ef2424", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.436 [INFO][6037] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.436 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" iface="eth0" netns="" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.436 [INFO][6037] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.436 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.494 [INFO][6044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.495 [INFO][6044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.495 [INFO][6044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.507 [WARNING][6044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.508 [INFO][6044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" HandleID="k8s-pod-network.6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Workload="ip--172--31--19--88-k8s-calico--apiserver--f77bf5bb4--pchds-eth0" Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.511 [INFO][6044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:44.517576 containerd[2044]: 2024-12-13 01:56:44.514 [INFO][6037] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2" Dec 13 01:56:44.519001 containerd[2044]: time="2024-12-13T01:56:44.517610462Z" level=info msg="TearDown network for sandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\" successfully" Dec 13 01:56:44.525581 containerd[2044]: time="2024-12-13T01:56:44.525501710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:44.525970 containerd[2044]: time="2024-12-13T01:56:44.525602210Z" level=info msg="RemovePodSandbox \"6b469c359687ff581b19abcb9d9e54e1f484768230c3635e784b7b8dfc63ead2\" returns successfully" Dec 13 01:56:44.527421 containerd[2044]: time="2024-12-13T01:56:44.527161082Z" level=info msg="StopPodSandbox for \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\"" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.618 [WARNING][6062] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434", Pod:"coredns-7db6d8ff4d-nxrmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6825cbc9625", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.619 [INFO][6062] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.619 [INFO][6062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" iface="eth0" netns="" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.619 [INFO][6062] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.619 [INFO][6062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.663 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.663 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.663 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.677 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.677 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.680 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:44.686117 containerd[2044]: 2024-12-13 01:56:44.683 [INFO][6062] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.687905 containerd[2044]: time="2024-12-13T01:56:44.686247578Z" level=info msg="TearDown network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\" successfully" Dec 13 01:56:44.687905 containerd[2044]: time="2024-12-13T01:56:44.686312090Z" level=info msg="StopPodSandbox for \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\" returns successfully" Dec 13 01:56:44.687905 containerd[2044]: time="2024-12-13T01:56:44.687503391Z" level=info msg="RemovePodSandbox for \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\"" Dec 13 01:56:44.687905 containerd[2044]: time="2024-12-13T01:56:44.687555399Z" level=info msg="Forcibly stopping sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\"" Dec 13 01:56:44.767139 systemd[1]: Started sshd@13-172.31.19.88:22-139.178.68.195:42512.service - OpenSSH per-connection server daemon (139.178.68.195:42512). Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.754 [WARNING][6086] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de6f5a-467f-4f9f-aa0a-c0c83f71a31b", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"dad8a9993c4ea1ad237dbc70bd5e8f4bbaf382e9ce06778f89931802a66e3434", Pod:"coredns-7db6d8ff4d-nxrmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6825cbc9625", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.754 [INFO][6086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.754 [INFO][6086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" iface="eth0" netns="" Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.754 [INFO][6086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.754 [INFO][6086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.807 [INFO][6093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.809 [INFO][6093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.809 [INFO][6093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.827 [WARNING][6093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.827 [INFO][6093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" HandleID="k8s-pod-network.98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Workload="ip--172--31--19--88-k8s-coredns--7db6d8ff4d--nxrmk-eth0" Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.829 [INFO][6093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:44.834216 containerd[2044]: 2024-12-13 01:56:44.831 [INFO][6086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645" Dec 13 01:56:44.834216 containerd[2044]: time="2024-12-13T01:56:44.833993631Z" level=info msg="TearDown network for sandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\" successfully" Dec 13 01:56:44.840945 containerd[2044]: time="2024-12-13T01:56:44.840379035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:44.840945 containerd[2044]: time="2024-12-13T01:56:44.840481923Z" level=info msg="RemovePodSandbox \"98f4194494843fe42de89ef80c359c29f830c16fccb065086e8bacad3d5c1645\" returns successfully" Dec 13 01:56:44.962758 sshd[6097]: Accepted publickey for core from 139.178.68.195 port 42512 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:44.966003 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:44.974896 systemd-logind[2008]: New session 14 of user core. Dec 13 01:56:44.980013 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:56:45.248747 sshd[6097]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:45.256982 systemd[1]: sshd@13-172.31.19.88:22-139.178.68.195:42512.service: Deactivated successfully. Dec 13 01:56:45.263814 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:56:45.266882 systemd-logind[2008]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:56:45.268885 systemd-logind[2008]: Removed session 14. Dec 13 01:56:45.971171 kubelet[3281]: I1213 01:56:45.970672 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:56:49.538624 kubelet[3281]: I1213 01:56:49.537796 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:56:50.289301 systemd[1]: Started sshd@14-172.31.19.88:22-139.178.68.195:46606.service - OpenSSH per-connection server daemon (139.178.68.195:46606). Dec 13 01:56:50.460676 sshd[6142]: Accepted publickey for core from 139.178.68.195 port 46606 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:50.463334 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:50.471811 systemd-logind[2008]: New session 15 of user core. 
Dec 13 01:56:50.477925 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:56:50.745783 sshd[6142]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:50.752333 systemd[1]: sshd@14-172.31.19.88:22-139.178.68.195:46606.service: Deactivated successfully. Dec 13 01:56:50.758258 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:56:50.763525 systemd-logind[2008]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:56:50.765697 systemd-logind[2008]: Removed session 15. Dec 13 01:56:55.786013 systemd[1]: Started sshd@15-172.31.19.88:22-139.178.68.195:46622.service - OpenSSH per-connection server daemon (139.178.68.195:46622). Dec 13 01:56:55.969439 sshd[6179]: Accepted publickey for core from 139.178.68.195 port 46622 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:55.973913 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:55.983146 systemd-logind[2008]: New session 16 of user core. Dec 13 01:56:55.992872 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:56:56.263964 sshd[6179]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:56.270384 systemd[1]: sshd@15-172.31.19.88:22-139.178.68.195:46622.service: Deactivated successfully. Dec 13 01:56:56.276901 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:56:56.280827 systemd-logind[2008]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:56:56.283534 systemd-logind[2008]: Removed session 16. Dec 13 01:56:56.303298 systemd[1]: Started sshd@16-172.31.19.88:22-139.178.68.195:53580.service - OpenSSH per-connection server daemon (139.178.68.195:53580). Dec 13 01:56:56.479240 sshd[6192]: Accepted publickey for core from 139.178.68.195 port 53580 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:56.482070 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:56.490113 systemd-logind[2008]: New session 17 of user core. Dec 13 01:56:56.497913 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:56:57.080809 sshd[6192]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:57.090169 systemd[1]: sshd@16-172.31.19.88:22-139.178.68.195:53580.service: Deactivated successfully. Dec 13 01:56:57.099950 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:56:57.127091 systemd-logind[2008]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:56:57.136233 systemd[1]: Started sshd@17-172.31.19.88:22-139.178.68.195:53586.service - OpenSSH per-connection server daemon (139.178.68.195:53586). Dec 13 01:56:57.141514 systemd-logind[2008]: Removed session 17. Dec 13 01:56:57.206890 containerd[2044]: time="2024-12-13T01:56:57.206710141Z" level=info msg="StopContainer for \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\" with timeout 300 (s)" Dec 13 01:56:57.211075 containerd[2044]: time="2024-12-13T01:56:57.210283009Z" level=info msg="Stop container \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\" with signal terminated" Dec 13 01:56:57.355347 sshd[6203]: Accepted publickey for core from 139.178.68.195 port 53586 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:57.361048 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:57.373949 systemd-logind[2008]: New session 18 of user core. 
Dec 13 01:56:57.406110 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:56:57.791810 containerd[2044]: time="2024-12-13T01:56:57.790198852Z" level=info msg="StopContainer for \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\" with timeout 30 (s)" Dec 13 01:56:57.792366 containerd[2044]: time="2024-12-13T01:56:57.792279760Z" level=info msg="Stop container \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\" with signal terminated" Dec 13 01:56:57.814070 systemd[1]: cri-containerd-b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9.scope: Deactivated successfully. Dec 13 01:56:57.893315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9-rootfs.mount: Deactivated successfully. Dec 13 01:56:57.909442 containerd[2044]: time="2024-12-13T01:56:57.908509036Z" level=info msg="shim disconnected" id=b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9 namespace=k8s.io Dec 13 01:56:57.909442 containerd[2044]: time="2024-12-13T01:56:57.908593828Z" level=warning msg="cleaning up after shim disconnected" id=b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9 namespace=k8s.io Dec 13 01:56:57.909442 containerd[2044]: time="2024-12-13T01:56:57.908618728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:58.000147 containerd[2044]: time="2024-12-13T01:56:58.000093049Z" level=info msg="StopContainer for \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\" returns successfully" Dec 13 01:56:58.008035 containerd[2044]: time="2024-12-13T01:56:58.007804153Z" level=info msg="StopPodSandbox for \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\"" Dec 13 01:56:58.008035 containerd[2044]: time="2024-12-13T01:56:58.007877857Z" level=info msg="Container to stop \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:58.024022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6-shm.mount: Deactivated successfully. Dec 13 01:56:58.067627 systemd[1]: cri-containerd-880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6.scope: Deactivated successfully. Dec 13 01:56:58.151274 containerd[2044]: time="2024-12-13T01:56:58.151173769Z" level=info msg="shim disconnected" id=880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6 namespace=k8s.io Dec 13 01:56:58.154274 containerd[2044]: time="2024-12-13T01:56:58.153727465Z" level=warning msg="cleaning up after shim disconnected" id=880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6 namespace=k8s.io Dec 13 01:56:58.154274 containerd[2044]: time="2024-12-13T01:56:58.154108237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:58.154903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:58.604327 systemd-networkd[1926]: calie6ff55a407b: Link DOWN Dec 13 01:56:58.604341 systemd-networkd[1926]: calie6ff55a407b: Lost carrier Dec 13 01:56:58.649555 kubelet[3281]: I1213 01:56:58.649262 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.599 [INFO][6309] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.600 [INFO][6309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" iface="eth0" netns="/var/run/netns/cni-07036a14-98e8-112c-e69f-6a0db9ad4a3e" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.600 [INFO][6309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" iface="eth0" netns="/var/run/netns/cni-07036a14-98e8-112c-e69f-6a0db9ad4a3e" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.612 [INFO][6309] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" after=12.612996ms iface="eth0" netns="/var/run/netns/cni-07036a14-98e8-112c-e69f-6a0db9ad4a3e" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.613 [INFO][6309] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.613 [INFO][6309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.710 [INFO][6320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.712 [INFO][6320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.713 [INFO][6320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.960 [INFO][6320] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.960 [INFO][6320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.965 [INFO][6320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:56:58.981728 containerd[2044]: 2024-12-13 01:56:58.969 [INFO][6309] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:56:58.981728 containerd[2044]: time="2024-12-13T01:56:58.975229565Z" level=info msg="TearDown network for sandbox \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" successfully" Dec 13 01:56:58.981728 containerd[2044]: time="2024-12-13T01:56:58.975271265Z" level=info msg="StopPodSandbox for \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" returns successfully" Dec 13 01:56:58.994370 systemd[1]: run-netns-cni\x2d07036a14\x2d98e8\x2d112c\x2de69f\x2d6a0db9ad4a3e.mount: Deactivated successfully. Dec 13 01:56:59.095831 kubelet[3281]: I1213 01:56:59.095756 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkdbg\" (UniqueName: \"kubernetes.io/projected/1fd116f0-a6fd-4513-9f18-4fa2b846559e-kube-api-access-wkdbg\") pod \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\" (UID: \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\") " Dec 13 01:56:59.096015 kubelet[3281]: I1213 01:56:59.095843 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fd116f0-a6fd-4513-9f18-4fa2b846559e-tigera-ca-bundle\") pod \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\" (UID: \"1fd116f0-a6fd-4513-9f18-4fa2b846559e\") " Dec 13 01:56:59.108213 kubelet[3281]: I1213 01:56:59.107914 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd116f0-a6fd-4513-9f18-4fa2b846559e-kube-api-access-wkdbg" (OuterVolumeSpecName: "kube-api-access-wkdbg") pod "1fd116f0-a6fd-4513-9f18-4fa2b846559e" (UID: "1fd116f0-a6fd-4513-9f18-4fa2b846559e"). InnerVolumeSpecName "kube-api-access-wkdbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:59.111166 systemd[1]: var-lib-kubelet-pods-1fd116f0\x2da6fd\x2d4513\x2d9f18\x2d4fa2b846559e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwkdbg.mount: Deactivated successfully. Dec 13 01:56:59.121759 kubelet[3281]: I1213 01:56:59.121543 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd116f0-a6fd-4513-9f18-4fa2b846559e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1fd116f0-a6fd-4513-9f18-4fa2b846559e" (UID: "1fd116f0-a6fd-4513-9f18-4fa2b846559e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:56:59.122935 systemd[1]: var-lib-kubelet-pods-1fd116f0\x2da6fd\x2d4513\x2d9f18\x2d4fa2b846559e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. 
Dec 13 01:56:59.196208 kubelet[3281]: I1213 01:56:59.196146 3281 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fd116f0-a6fd-4513-9f18-4fa2b846559e-tigera-ca-bundle\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:56:59.196208 kubelet[3281]: I1213 01:56:59.196202 3281 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wkdbg\" (UniqueName: \"kubernetes.io/projected/1fd116f0-a6fd-4513-9f18-4fa2b846559e-kube-api-access-wkdbg\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:56:59.668364 systemd[1]: Removed slice kubepods-besteffort-pod1fd116f0_a6fd_4513_9f18_4fa2b846559e.slice - libcontainer container kubepods-besteffort-pod1fd116f0_a6fd_4513_9f18_4fa2b846559e.slice. Dec 13 01:56:59.809781 kubelet[3281]: I1213 01:56:59.809692 3281 topology_manager.go:215] "Topology Admit Handler" podUID="a089a448-266f-45f0-83ae-3b0d411c1886" podNamespace="calico-system" podName="calico-kube-controllers-5b5fbbd4b-vhkl2" Dec 13 01:56:59.812440 kubelet[3281]: E1213 01:56:59.811564 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1fd116f0-a6fd-4513-9f18-4fa2b846559e" containerName="calico-kube-controllers" Dec 13 01:56:59.812440 kubelet[3281]: I1213 01:56:59.811779 3281 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd116f0-a6fd-4513-9f18-4fa2b846559e" containerName="calico-kube-controllers" Dec 13 01:56:59.833284 systemd[1]: Created slice kubepods-besteffort-poda089a448_266f_45f0_83ae_3b0d411c1886.slice - libcontainer container kubepods-besteffort-poda089a448_266f_45f0_83ae_3b0d411c1886.slice. Dec 13 01:56:59.901347 kubelet[3281]: I1213 01:56:59.901271 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb7jp\" (UniqueName: \"kubernetes.io/projected/a089a448-266f-45f0-83ae-3b0d411c1886-kube-api-access-pb7jp\") pod \"calico-kube-controllers-5b5fbbd4b-vhkl2\" (UID: \"a089a448-266f-45f0-83ae-3b0d411c1886\") " pod="calico-system/calico-kube-controllers-5b5fbbd4b-vhkl2" Dec 13 01:56:59.901519 kubelet[3281]: I1213 01:56:59.901356 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a089a448-266f-45f0-83ae-3b0d411c1886-tigera-ca-bundle\") pod \"calico-kube-controllers-5b5fbbd4b-vhkl2\" (UID: \"a089a448-266f-45f0-83ae-3b0d411c1886\") " pod="calico-system/calico-kube-controllers-5b5fbbd4b-vhkl2" Dec 13 01:57:00.142614 containerd[2044]: time="2024-12-13T01:57:00.142533999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b5fbbd4b-vhkl2,Uid:a089a448-266f-45f0-83ae-3b0d411c1886,Namespace:calico-system,Attempt:0,}" Dec 13 01:57:00.587591 (udev-worker)[6321]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:57:00.594082 systemd-networkd[1926]: calibed28f5f2b9: Link UP Dec 13 01:57:00.595980 systemd-networkd[1926]: calibed28f5f2b9: Gained carrier Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.305 [INFO][6350] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0 calico-kube-controllers-5b5fbbd4b- calico-system a089a448-266f-45f0-83ae-3b0d411c1886 1211 0 2024-12-13 01:56:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b5fbbd4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-88 calico-kube-controllers-5b5fbbd4b-vhkl2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibed28f5f2b9 [] []}} ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.305 [INFO][6350] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.394 [INFO][6360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" HandleID="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.432 [INFO][6360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" HandleID="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cc30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-88", "pod":"calico-kube-controllers-5b5fbbd4b-vhkl2", "timestamp":"2024-12-13 01:57:00.394600865 +0000 UTC"}, Hostname:"ip-172-31-19-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.432 [INFO][6360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.432 [INFO][6360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.432 [INFO][6360] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-88' Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.455 [INFO][6360] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.496 [INFO][6360] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.523 [INFO][6360] ipam/ipam.go 489: Trying affinity for 192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.539 [INFO][6360] ipam/ipam.go 155: Attempting to load block cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.545 [INFO][6360] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.81.0/26 host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.545 [INFO][6360] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.81.0/26 handle="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.549 [INFO][6360] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736 Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.560 [INFO][6360] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.81.0/26 handle="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.577 [INFO][6360] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.81.7/26] block=192.168.81.0/26 handle="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.578 [INFO][6360] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.81.7/26] handle="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" host="ip-172-31-19-88" Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.578 [INFO][6360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:57:00.677484 containerd[2044]: 2024-12-13 01:57:00.578 [INFO][6360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.7/26] IPv6=[] ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" HandleID="k8s-pod-network.8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" Dec 13 01:57:00.680215 containerd[2044]: 2024-12-13 01:57:00.582 [INFO][6350] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0", GenerateName:"calico-kube-controllers-5b5fbbd4b-", Namespace:"calico-system", SelfLink:"", UID:"a089a448-266f-45f0-83ae-3b0d411c1886", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b5fbbd4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"", Pod:"calico-kube-controllers-5b5fbbd4b-vhkl2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibed28f5f2b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:00.680215 containerd[2044]: 2024-12-13 01:57:00.582 [INFO][6350] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.81.7/32] ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" Dec 13 01:57:00.680215 containerd[2044]: 2024-12-13 01:57:00.582 [INFO][6350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibed28f5f2b9 ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" Dec 13 01:57:00.680215 containerd[2044]: 2024-12-13 01:57:00.597 [INFO][6350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" Dec 13 01:57:00.680215 containerd[2044]: 2024-12-13 01:57:00.599 [INFO][6350] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0", GenerateName:"calico-kube-controllers-5b5fbbd4b-", Namespace:"calico-system", SelfLink:"", UID:"a089a448-266f-45f0-83ae-3b0d411c1886", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 56, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b5fbbd4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-88", ContainerID:"8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736", Pod:"calico-kube-controllers-5b5fbbd4b-vhkl2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibed28f5f2b9", MAC:"f2:35:0d:75:5e:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:57:00.680215 containerd[2044]: 2024-12-13 01:57:00.672 [INFO][6350] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736" Namespace="calico-system" Pod="calico-kube-controllers-5b5fbbd4b-vhkl2" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--5b5fbbd4b--vhkl2-eth0" Dec 13 01:57:00.744125 containerd[2044]: time="2024-12-13T01:57:00.743914734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:00.745265 containerd[2044]: time="2024-12-13T01:57:00.745161354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:00.745511 containerd[2044]: time="2024-12-13T01:57:00.745288206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:00.746094 containerd[2044]: time="2024-12-13T01:57:00.745767858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:00.843219 systemd[1]: Started cri-containerd-8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736.scope - libcontainer container 8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736. 
Dec 13 01:57:00.991661 kubelet[3281]: I1213 01:57:00.989745 3281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd116f0-a6fd-4513-9f18-4fa2b846559e" path="/var/lib/kubelet/pods/1fd116f0-a6fd-4513-9f18-4fa2b846559e/volumes" Dec 13 01:57:01.006167 containerd[2044]: time="2024-12-13T01:57:01.006042616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b5fbbd4b-vhkl2,Uid:a089a448-266f-45f0-83ae-3b0d411c1886,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736\"" Dec 13 01:57:01.033872 containerd[2044]: time="2024-12-13T01:57:01.033762412Z" level=info msg="CreateContainer within sandbox \"8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:57:01.079703 containerd[2044]: time="2024-12-13T01:57:01.077561920Z" level=info msg="CreateContainer within sandbox \"8a846c9b34997062be36e44c3e57fb3a76cf4594b3d1713dce560fa50fb6e736\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502\"" Dec 13 01:57:01.081123 containerd[2044]: time="2024-12-13T01:57:01.080832868Z" level=info msg="StartContainer for \"c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502\"" Dec 13 01:57:01.182703 systemd[1]: run-containerd-runc-k8s.io-c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502-runc.wyXfFs.mount: Deactivated successfully. Dec 13 01:57:01.197460 systemd[1]: Started cri-containerd-c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502.scope - libcontainer container c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502. Dec 13 01:57:01.858920 systemd-networkd[1926]: calibed28f5f2b9: Gained IPv6LL Dec 13 01:57:01.950385 containerd[2044]: time="2024-12-13T01:57:01.949859444Z" level=info msg="StartContainer for \"c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502\" returns successfully" Dec 13 01:57:02.209701 systemd[1]: cri-containerd-4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461.scope: Deactivated successfully. Dec 13 01:57:02.294365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:02.308389 containerd[2044]: time="2024-12-13T01:57:02.308143266Z" level=info msg="shim disconnected" id=4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461 namespace=k8s.io Dec 13 01:57:02.308896 containerd[2044]: time="2024-12-13T01:57:02.308683086Z" level=warning msg="cleaning up after shim disconnected" id=4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461 namespace=k8s.io Dec 13 01:57:02.308896 containerd[2044]: time="2024-12-13T01:57:02.308752098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:02.390883 containerd[2044]: time="2024-12-13T01:57:02.390811662Z" level=info msg="StopContainer for \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\" returns successfully" Dec 13 01:57:02.393092 containerd[2044]: time="2024-12-13T01:57:02.391585782Z" level=info msg="StopPodSandbox for \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\"" Dec 13 01:57:02.393092 containerd[2044]: time="2024-12-13T01:57:02.391750398Z" level=info msg="Container to stop \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:02.404472 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8-shm.mount: Deactivated successfully. Dec 13 01:57:02.427811 systemd[1]: cri-containerd-cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8.scope: Deactivated successfully. Dec 13 01:57:02.531701 containerd[2044]: time="2024-12-13T01:57:02.528123811Z" level=info msg="shim disconnected" id=cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8 namespace=k8s.io Dec 13 01:57:02.531701 containerd[2044]: time="2024-12-13T01:57:02.528208447Z" level=warning msg="cleaning up after shim disconnected" id=cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8 namespace=k8s.io Dec 13 01:57:02.531701 containerd[2044]: time="2024-12-13T01:57:02.528233263Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:02.540153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:02.604514 containerd[2044]: time="2024-12-13T01:57:02.599564503Z" level=info msg="TearDown network for sandbox \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" successfully" Dec 13 01:57:02.604514 containerd[2044]: time="2024-12-13T01:57:02.599661547Z" level=info msg="StopPodSandbox for \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" returns successfully" Dec 13 01:57:02.702710 kubelet[3281]: I1213 01:57:02.701325 3281 scope.go:117] "RemoveContainer" containerID="4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461" Dec 13 01:57:02.714185 containerd[2044]: time="2024-12-13T01:57:02.714086840Z" level=info msg="RemoveContainer for \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\"" Dec 13 01:57:02.724455 kubelet[3281]: I1213 01:57:02.722786 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b5fbbd4b-vhkl2" podStartSLOduration=3.7227569000000003 podStartE2EDuration="3.7227569s" podCreationTimestamp="2024-12-13 01:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:02.720841364 +0000 UTC m=+79.990896542" watchObservedRunningTime="2024-12-13 01:57:02.7227569 +0000 UTC m=+79.992812222" Dec 13 01:57:02.730700 containerd[2044]: time="2024-12-13T01:57:02.728621684Z" level=info msg="RemoveContainer for \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\" returns successfully" Dec 13 01:57:02.731658 kubelet[3281]: I1213 01:57:02.731457 3281 scope.go:117] "RemoveContainer" containerID="4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461" Dec 13 01:57:02.733135 containerd[2044]: time="2024-12-13T01:57:02.733044176Z" level=error msg="ContainerStatus for \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\": not found" Dec 13 01:57:02.733581 kubelet[3281]: E1213 01:57:02.733453 3281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\": not found" containerID="4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461" Dec 13 01:57:02.733581 kubelet[3281]: I1213 01:57:02.733520 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461"} err="failed to get container status \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ab4509fb3759ef66e6359916c8b3a15b74c2a9d5afa03784c92385b164fe461\": not found" Dec 13 01:57:02.742357 kubelet[3281]: I1213 01:57:02.740861 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-typha-certs\") pod \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\" (UID: \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\") " Dec 13 01:57:02.742357 kubelet[3281]: I1213 01:57:02.740944 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m849t\" (UniqueName: 
\"kubernetes.io/projected/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-kube-api-access-m849t\") pod \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\" (UID: \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\") " Dec 13 01:57:02.742357 kubelet[3281]: I1213 01:57:02.740999 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-tigera-ca-bundle\") pod \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\" (UID: \"a3d1eb13-2428-4586-93d9-9fa7d23cd9e2\") " Dec 13 01:57:02.777781 kubelet[3281]: I1213 01:57:02.776904 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-kube-api-access-m849t" (OuterVolumeSpecName: "kube-api-access-m849t") pod "a3d1eb13-2428-4586-93d9-9fa7d23cd9e2" (UID: "a3d1eb13-2428-4586-93d9-9fa7d23cd9e2"). InnerVolumeSpecName "kube-api-access-m849t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:57:02.779908 kubelet[3281]: I1213 01:57:02.779716 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a3d1eb13-2428-4586-93d9-9fa7d23cd9e2" (UID: "a3d1eb13-2428-4586-93d9-9fa7d23cd9e2"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:57:02.779626 systemd[1]: var-lib-kubelet-pods-a3d1eb13\x2d2428\x2d4586\x2d93d9\x2d9fa7d23cd9e2-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Dec 13 01:57:02.790415 kubelet[3281]: I1213 01:57:02.781883 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "a3d1eb13-2428-4586-93d9-9fa7d23cd9e2" (UID: "a3d1eb13-2428-4586-93d9-9fa7d23cd9e2"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:57:02.782855 systemd[1]: var-lib-kubelet-pods-a3d1eb13\x2d2428\x2d4586\x2d93d9\x2d9fa7d23cd9e2-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Dec 13 01:57:02.801189 systemd[1]: run-containerd-runc-k8s.io-c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502-runc.pycDHB.mount: Deactivated successfully. Dec 13 01:57:02.801858 systemd[1]: var-lib-kubelet-pods-a3d1eb13\x2d2428\x2d4586\x2d93d9\x2d9fa7d23cd9e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm849t.mount: Deactivated successfully. 
Dec 13 01:57:02.842712 kubelet[3281]: I1213 01:57:02.841605 3281 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-typha-certs\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:02.842712 kubelet[3281]: I1213 01:57:02.841676 3281 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-m849t\" (UniqueName: \"kubernetes.io/projected/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-kube-api-access-m849t\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:02.842712 kubelet[3281]: I1213 01:57:02.841703 3281 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2-tigera-ca-bundle\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:02.926152 sshd[6203]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:02.943808 systemd[1]: sshd@17-172.31.19.88:22-139.178.68.195:53586.service: Deactivated successfully. Dec 13 01:57:02.943810 systemd-logind[2008]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:57:02.958592 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:57:02.959986 systemd[1]: session-18.scope: Consumed 1.114s CPU time. Dec 13 01:57:02.997168 systemd[1]: Started sshd@18-172.31.19.88:22-139.178.68.195:53600.service - OpenSSH per-connection server daemon (139.178.68.195:53600). Dec 13 01:57:03.004125 systemd-logind[2008]: Removed session 18. Dec 13 01:57:03.047030 systemd[1]: Removed slice kubepods-besteffort-poda3d1eb13_2428_4586_93d9_9fa7d23cd9e2.slice - libcontainer container kubepods-besteffort-poda3d1eb13_2428_4586_93d9_9fa7d23cd9e2.slice. Dec 13 01:57:03.210373 sshd[6568]: Accepted publickey for core from 139.178.68.195 port 53600 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:03.219181 sshd[6568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:03.234793 systemd-logind[2008]: New session 19 of user core. Dec 13 01:57:03.257108 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:57:04.028415 sshd[6568]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:04.041274 systemd[1]: sshd@18-172.31.19.88:22-139.178.68.195:53600.service: Deactivated successfully. Dec 13 01:57:04.047400 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:57:04.051600 systemd-logind[2008]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:57:04.083359 systemd[1]: Started sshd@19-172.31.19.88:22-139.178.68.195:53604.service - OpenSSH per-connection server daemon (139.178.68.195:53604). Dec 13 01:57:04.087563 systemd-logind[2008]: Removed session 19. Dec 13 01:57:04.286823 sshd[6609]: Accepted publickey for core from 139.178.68.195 port 53604 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:04.290511 sshd[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:04.305341 systemd-logind[2008]: New session 20 of user core. Dec 13 01:57:04.313976 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:57:04.652035 sshd[6609]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:04.659701 systemd[1]: sshd@19-172.31.19.88:22-139.178.68.195:53604.service: Deactivated successfully. Dec 13 01:57:04.665122 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:57:04.670874 systemd-logind[2008]: Session 20 logged out. 
Waiting for processes to exit. Dec 13 01:57:04.677050 systemd-logind[2008]: Removed session 20. Dec 13 01:57:04.835915 ntpd[2000]: Listen normally on 16 calibed28f5f2b9 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 13 01:57:04.835996 ntpd[2000]: Deleting interface #14 calie6ff55a407b, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=27 secs Dec 13 01:57:04.836610 ntpd[2000]: 13 Dec 01:57:04 ntpd[2000]: Listen normally on 16 calibed28f5f2b9 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 13 01:57:04.836610 ntpd[2000]: 13 Dec 01:57:04 ntpd[2000]: Deleting interface #14 calie6ff55a407b, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=27 secs Dec 13 01:57:04.990289 kubelet[3281]: I1213 01:57:04.989430 3281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d1eb13-2428-4586-93d9-9fa7d23cd9e2" path="/var/lib/kubelet/pods/a3d1eb13-2428-4586-93d9-9fa7d23cd9e2/volumes" Dec 13 01:57:09.696610 systemd[1]: Started sshd@20-172.31.19.88:22-139.178.68.195:59278.service - OpenSSH per-connection server daemon (139.178.68.195:59278). Dec 13 01:57:09.880710 sshd[6721]: Accepted publickey for core from 139.178.68.195 port 59278 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:09.886428 sshd[6721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:09.900711 systemd-logind[2008]: New session 21 of user core. Dec 13 01:57:09.910096 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:57:10.194139 sshd[6721]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:10.204721 systemd[1]: sshd@20-172.31.19.88:22-139.178.68.195:59278.service: Deactivated successfully. Dec 13 01:57:10.209974 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:57:10.211474 systemd-logind[2008]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:57:10.213988 systemd-logind[2008]: Removed session 21. Dec 13 01:57:15.235966 systemd[1]: Started sshd@21-172.31.19.88:22-139.178.68.195:59282.service - OpenSSH per-connection server daemon (139.178.68.195:59282). Dec 13 01:57:15.421121 sshd[6835]: Accepted publickey for core from 139.178.68.195 port 59282 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:15.425543 sshd[6835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:15.440140 systemd-logind[2008]: New session 22 of user core. Dec 13 01:57:15.446946 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:57:15.705284 sshd[6835]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:15.716354 systemd[1]: sshd@21-172.31.19.88:22-139.178.68.195:59282.service: Deactivated successfully. Dec 13 01:57:15.722959 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:57:15.724849 systemd-logind[2008]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:57:15.727851 systemd-logind[2008]: Removed session 22. Dec 13 01:57:20.755839 systemd[1]: Started sshd@22-172.31.19.88:22-139.178.68.195:33732.service - OpenSSH per-connection server daemon (139.178.68.195:33732). Dec 13 01:57:20.947117 sshd[6933]: Accepted publickey for core from 139.178.68.195 port 33732 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:20.950137 sshd[6933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:20.959348 systemd-logind[2008]: New session 23 of user core. 
Dec 13 01:57:20.966939 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:57:21.254244 sshd[6933]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:21.262614 systemd[1]: sshd@22-172.31.19.88:22-139.178.68.195:33732.service: Deactivated successfully. Dec 13 01:57:21.269395 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:57:21.273738 systemd-logind[2008]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:57:21.276338 systemd-logind[2008]: Removed session 23. Dec 13 01:57:26.296141 systemd[1]: Started sshd@23-172.31.19.88:22-139.178.68.195:52960.service - OpenSSH per-connection server daemon (139.178.68.195:52960). Dec 13 01:57:26.484630 sshd[7061]: Accepted publickey for core from 139.178.68.195 port 52960 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:26.487715 sshd[7061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:26.495750 systemd-logind[2008]: New session 24 of user core. Dec 13 01:57:26.504913 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:57:26.767035 sshd[7061]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:26.775369 systemd[1]: sshd@23-172.31.19.88:22-139.178.68.195:52960.service: Deactivated successfully. Dec 13 01:57:26.781337 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:57:26.787559 systemd-logind[2008]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:57:26.794406 systemd-logind[2008]: Removed session 24. Dec 13 01:57:30.185106 systemd[1]: run-containerd-runc-k8s.io-c063ffaa8d93b1d17c9dd9fc3399dc41fe2179fda6492d2e17342bf8e1b98502-runc.zqPMJg.mount: Deactivated successfully. Dec 13 01:57:30.589725 containerd[2044]: time="2024-12-13T01:57:30.589260130Z" level=info msg="StopContainer for \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\" with timeout 5 (s)" Dec 13 01:57:30.592123 containerd[2044]: time="2024-12-13T01:57:30.592061855Z" level=info msg="Stop container \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\" with signal terminated" Dec 13 01:57:30.632963 systemd[1]: cri-containerd-b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00.scope: Deactivated successfully. Dec 13 01:57:30.635121 systemd[1]: cri-containerd-b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00.scope: Consumed 12.012s CPU time. Dec 13 01:57:30.676508 containerd[2044]: time="2024-12-13T01:57:30.676394795Z" level=info msg="shim disconnected" id=b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00 namespace=k8s.io Dec 13 01:57:30.676508 containerd[2044]: time="2024-12-13T01:57:30.676480343Z" level=warning msg="cleaning up after shim disconnected" id=b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00 namespace=k8s.io Dec 13 01:57:30.676508 containerd[2044]: time="2024-12-13T01:57:30.676502183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:30.681942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:30.726489 containerd[2044]: time="2024-12-13T01:57:30.726425171Z" level=info msg="StopContainer for \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\" returns successfully" Dec 13 01:57:30.727470 containerd[2044]: time="2024-12-13T01:57:30.727318211Z" level=info msg="StopPodSandbox for \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\"" Dec 13 01:57:30.727470 containerd[2044]: time="2024-12-13T01:57:30.727384967Z" level=info msg="Container to stop \"4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:30.727470 containerd[2044]: time="2024-12-13T01:57:30.727413239Z" level=info msg="Container to stop \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:30.727470 containerd[2044]: time="2024-12-13T01:57:30.727444259Z" level=info msg="Container to stop \"e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:30.733541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4-shm.mount: Deactivated successfully. Dec 13 01:57:30.743008 systemd[1]: cri-containerd-139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4.scope: Deactivated successfully. Dec 13 01:57:30.789100 containerd[2044]: time="2024-12-13T01:57:30.788949011Z" level=info msg="shim disconnected" id=139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4 namespace=k8s.io Dec 13 01:57:30.789100 containerd[2044]: time="2024-12-13T01:57:30.789090251Z" level=warning msg="cleaning up after shim disconnected" id=139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4 namespace=k8s.io Dec 13 01:57:30.790252 containerd[2044]: time="2024-12-13T01:57:30.789121487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:30.831994 containerd[2044]: time="2024-12-13T01:57:30.831829032Z" level=info msg="TearDown network for sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" successfully" Dec 13 01:57:30.831994 containerd[2044]: time="2024-12-13T01:57:30.831897720Z" level=info msg="StopPodSandbox for \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" returns successfully" Dec 13 01:57:30.906208 kubelet[3281]: I1213 01:57:30.904952 3281 topology_manager.go:215] "Topology Admit Handler" podUID="204aa044-0ed6-48b7-abe3-f0daee877246" podNamespace="calico-system" podName="calico-node-kjsl4" Dec 13 01:57:30.906208 kubelet[3281]: E1213 01:57:30.905060 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46af5ed3-04d4-4283-aa3b-cd658fdc701a" containerName="install-cni" Dec 13 01:57:30.906208 kubelet[3281]: E1213 01:57:30.905081 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46af5ed3-04d4-4283-aa3b-cd658fdc701a" containerName="calico-node" Dec 13 01:57:30.906208 kubelet[3281]: E1213 01:57:30.905101 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d1eb13-2428-4586-93d9-9fa7d23cd9e2" containerName="calico-typha" Dec 13 01:57:30.906208 kubelet[3281]: E1213 01:57:30.905119 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46af5ed3-04d4-4283-aa3b-cd658fdc701a" containerName="flexvol-driver" Dec 13 01:57:30.906208 kubelet[3281]: I1213 01:57:30.905170 3281 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d1eb13-2428-4586-93d9-9fa7d23cd9e2" containerName="calico-typha" Dec 13 01:57:30.906208 kubelet[3281]: I1213 01:57:30.905188 3281 memory_manager.go:354] "RemoveStaleState removing state" podUID="46af5ed3-04d4-4283-aa3b-cd658fdc701a" containerName="calico-node" Dec 13 01:57:30.932069 systemd[1]: Created slice kubepods-besteffort-pod204aa044_0ed6_48b7_abe3_f0daee877246.slice - libcontainer container kubepods-besteffort-pod204aa044_0ed6_48b7_abe3_f0daee877246.slice. Dec 13 01:57:30.945678 kubelet[3281]: I1213 01:57:30.944765 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-log-dir\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.945678 kubelet[3281]: I1213 01:57:30.944839 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8wf\" (UniqueName: \"kubernetes.io/projected/46af5ed3-04d4-4283-aa3b-cd658fdc701a-kube-api-access-vd8wf\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.945678 kubelet[3281]: I1213 01:57:30.944875 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-bin-dir\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.945678 kubelet[3281]: I1213 01:57:30.944917 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46af5ed3-04d4-4283-aa3b-cd658fdc701a-tigera-ca-bundle\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.945678 kubelet[3281]: I1213 01:57:30.944951 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-lib-calico\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.945678 kubelet[3281]: I1213 01:57:30.944991 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/46af5ed3-04d4-4283-aa3b-cd658fdc701a-node-certs\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.946118 kubelet[3281]: I1213 01:57:30.945027 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-policysync\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.946118 kubelet[3281]: I1213 01:57:30.945064 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-run-calico\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.946118 kubelet[3281]: I1213 01:57:30.945098 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-flexvol-driver-host\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.946118 kubelet[3281]: I1213 01:57:30.945129 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-lib-modules\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.946118 kubelet[3281]: I1213 01:57:30.945165 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-xtables-lock\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.946118 kubelet[3281]: I1213 01:57:30.945198 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-net-dir\") pod \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\" (UID: \"46af5ed3-04d4-4283-aa3b-cd658fdc701a\") " Dec 13 01:57:30.946465 kubelet[3281]: I1213 01:57:30.945311 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.946465 kubelet[3281]: I1213 01:57:30.945373 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.946465 kubelet[3281]: I1213 01:57:30.945909 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-policysync" (OuterVolumeSpecName: "policysync") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.946465 kubelet[3281]: I1213 01:57:30.945986 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.949047 kubelet[3281]: I1213 01:57:30.948963 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.949047 kubelet[3281]: I1213 01:57:30.949056 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.949337 kubelet[3281]: I1213 01:57:30.949106 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.949337 kubelet[3281]: I1213 01:57:30.949152 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.949337 kubelet[3281]: I1213 01:57:30.949199 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:30.953765 kubelet[3281]: I1213 01:57:30.953382 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46af5ed3-04d4-4283-aa3b-cd658fdc701a-kube-api-access-vd8wf" (OuterVolumeSpecName: "kube-api-access-vd8wf") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "kube-api-access-vd8wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:57:30.960691 kubelet[3281]: I1213 01:57:30.960089 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46af5ed3-04d4-4283-aa3b-cd658fdc701a-node-certs" (OuterVolumeSpecName: "node-certs") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:57:30.962018 kubelet[3281]: I1213 01:57:30.961738 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46af5ed3-04d4-4283-aa3b-cd658fdc701a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "46af5ed3-04d4-4283-aa3b-cd658fdc701a" (UID: "46af5ed3-04d4-4283-aa3b-cd658fdc701a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:57:30.995865 systemd[1]: Removed slice kubepods-besteffort-pod46af5ed3_04d4_4283_aa3b_cd658fdc701a.slice - libcontainer container kubepods-besteffort-pod46af5ed3_04d4_4283_aa3b_cd658fdc701a.slice. Dec 13 01:57:30.996098 systemd[1]: kubepods-besteffort-pod46af5ed3_04d4_4283_aa3b_cd658fdc701a.slice: Consumed 12.946s CPU time. 
Dec 13 01:57:31.045875 kubelet[3281]: I1213 01:57:31.045809 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cvd\" (UniqueName: \"kubernetes.io/projected/204aa044-0ed6-48b7-abe3-f0daee877246-kube-api-access-f5cvd\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046087 kubelet[3281]: I1213 01:57:31.045900 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-cni-net-dir\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046087 kubelet[3281]: I1213 01:57:31.046000 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-cni-log-dir\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046207 kubelet[3281]: I1213 01:57:31.046102 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-policysync\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046362 kubelet[3281]: I1213 01:57:31.046252 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/204aa044-0ed6-48b7-abe3-f0daee877246-node-certs\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046362 kubelet[3281]: I1213 01:57:31.046328 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-lib-modules\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046471 kubelet[3281]: I1213 01:57:31.046411 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204aa044-0ed6-48b7-abe3-f0daee877246-tigera-ca-bundle\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046533 kubelet[3281]: I1213 01:57:31.046453 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-flexvol-driver-host\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046628 kubelet[3281]: I1213 01:57:31.046540 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-cni-bin-dir\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046730 kubelet[3281]: I1213 01:57:31.046681 3281 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-var-lib-calico\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046793 kubelet[3281]: I1213 01:57:31.046722 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-var-run-calico\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.046896 kubelet[3281]: I1213 01:57:31.046803 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/204aa044-0ed6-48b7-abe3-f0daee877246-xtables-lock\") pod \"calico-node-kjsl4\" (UID: \"204aa044-0ed6-48b7-abe3-f0daee877246\") " pod="calico-system/calico-node-kjsl4" Dec 13 01:57:31.047073 kubelet[3281]: I1213 01:57:31.047031 3281 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-run-calico\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047073 kubelet[3281]: I1213 01:57:31.047100 3281 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-log-dir\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047235 kubelet[3281]: I1213 01:57:31.047125 3281 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46af5ed3-04d4-4283-aa3b-cd658fdc701a-tigera-ca-bundle\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047235 kubelet[3281]: I1213 01:57:31.047146 3281 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-var-lib-calico\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047235 kubelet[3281]: I1213 01:57:31.047194 3281 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-policysync\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047235 kubelet[3281]: I1213 01:57:31.047215 3281 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-flexvol-driver-host\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047444 kubelet[3281]: I1213 01:57:31.047236 3281 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-lib-modules\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047444 kubelet[3281]: I1213 01:57:31.047280 3281 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-xtables-lock\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047444 kubelet[3281]: I1213 01:57:31.047300 3281 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-net-dir\") on node \"ip-172-31-19-88\" 
DevicePath \"\"" Dec 13 01:57:31.047444 kubelet[3281]: I1213 01:57:31.047320 3281 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vd8wf\" (UniqueName: \"kubernetes.io/projected/46af5ed3-04d4-4283-aa3b-cd658fdc701a-kube-api-access-vd8wf\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047444 kubelet[3281]: I1213 01:57:31.047367 3281 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/46af5ed3-04d4-4283-aa3b-cd658fdc701a-cni-bin-dir\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.047444 kubelet[3281]: I1213 01:57:31.047390 3281 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/46af5ed3-04d4-4283-aa3b-cd658fdc701a-node-certs\") on node \"ip-172-31-19-88\" DevicePath \"\"" Dec 13 01:57:31.175577 systemd[1]: var-lib-kubelet-pods-46af5ed3\x2d04d4\x2d4283\x2daa3b\x2dcd658fdc701a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Dec 13 01:57:31.175797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4-rootfs.mount: Deactivated successfully. Dec 13 01:57:31.175938 systemd[1]: var-lib-kubelet-pods-46af5ed3\x2d04d4\x2d4283\x2daa3b\x2dcd658fdc701a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvd8wf.mount: Deactivated successfully. Dec 13 01:57:31.176071 systemd[1]: var-lib-kubelet-pods-46af5ed3\x2d04d4\x2d4283\x2daa3b\x2dcd658fdc701a-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Dec 13 01:57:31.239688 containerd[2044]: time="2024-12-13T01:57:31.239590618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kjsl4,Uid:204aa044-0ed6-48b7-abe3-f0daee877246,Namespace:calico-system,Attempt:0,}" Dec 13 01:57:31.286898 containerd[2044]: time="2024-12-13T01:57:31.285699454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:31.286898 containerd[2044]: time="2024-12-13T01:57:31.285795874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:31.286898 containerd[2044]: time="2024-12-13T01:57:31.285832390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:31.286898 containerd[2044]: time="2024-12-13T01:57:31.285992686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:31.333969 systemd[1]: Started cri-containerd-427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e.scope - libcontainer container 427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e. 
Dec 13 01:57:31.375506 containerd[2044]: time="2024-12-13T01:57:31.375440818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kjsl4,Uid:204aa044-0ed6-48b7-abe3-f0daee877246,Namespace:calico-system,Attempt:0,} returns sandbox id \"427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e\"" Dec 13 01:57:31.389457 containerd[2044]: time="2024-12-13T01:57:31.389203822Z" level=info msg="CreateContainer within sandbox \"427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:57:31.417904 containerd[2044]: time="2024-12-13T01:57:31.417825731Z" level=info msg="CreateContainer within sandbox \"427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179\"" Dec 13 01:57:31.419675 containerd[2044]: time="2024-12-13T01:57:31.419583287Z" level=info msg="StartContainer for \"13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179\"" Dec 13 01:57:31.474979 systemd[1]: Started cri-containerd-13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179.scope - libcontainer container 13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179. Dec 13 01:57:31.534489 containerd[2044]: time="2024-12-13T01:57:31.534418451Z" level=info msg="StartContainer for \"13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179\" returns successfully" Dec 13 01:57:31.586602 systemd[1]: cri-containerd-13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179.scope: Deactivated successfully. Dec 13 01:57:31.654744 containerd[2044]: time="2024-12-13T01:57:31.654617976Z" level=info msg="shim disconnected" id=13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179 namespace=k8s.io Dec 13 01:57:31.654744 containerd[2044]: time="2024-12-13T01:57:31.654727992Z" level=warning msg="cleaning up after shim disconnected" id=13bd2253ec7f0ee9f2787659005d4016f207bd6516670624ae4e4bbdee460179 namespace=k8s.io Dec 13 01:57:31.655620 containerd[2044]: time="2024-12-13T01:57:31.654749784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:31.806415 systemd[1]: Started sshd@24-172.31.19.88:22-139.178.68.195:52962.service - OpenSSH per-connection server daemon (139.178.68.195:52962). 
Dec 13 01:57:31.842721 containerd[2044]: time="2024-12-13T01:57:31.842417125Z" level=info msg="CreateContainer within sandbox \"427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:57:31.849895 kubelet[3281]: I1213 01:57:31.849820 3281 scope.go:117] "RemoveContainer" containerID="b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00" Dec 13 01:57:31.862350 containerd[2044]: time="2024-12-13T01:57:31.861743449Z" level=info msg="RemoveContainer for \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\"" Dec 13 01:57:31.874055 containerd[2044]: time="2024-12-13T01:57:31.873993685Z" level=info msg="RemoveContainer for \"b4aa59b54653794721d5c341a4d7dfa20de39cf10baa5f49f5ce473f401f3f00\" returns successfully" Dec 13 01:57:31.874364 kubelet[3281]: I1213 01:57:31.874322 3281 scope.go:117] "RemoveContainer" containerID="4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5" Dec 13 01:57:31.878379 containerd[2044]: time="2024-12-13T01:57:31.878307793Z" level=info msg="RemoveContainer for \"4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5\"" Dec 13 01:57:31.887489 containerd[2044]: time="2024-12-13T01:57:31.887355253Z" level=info msg="CreateContainer within sandbox \"427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7\"" Dec 13 01:57:31.899097 containerd[2044]: time="2024-12-13T01:57:31.899031541Z" level=info msg="StartContainer for \"1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7\"" Dec 13 01:57:31.920973 containerd[2044]: time="2024-12-13T01:57:31.920867977Z" level=info msg="RemoveContainer for \"4623191f08ad67dca21c5883999e8b86ed2578382f8ec2ecc16ba1c9a2c950b5\" returns successfully" Dec 13 01:57:31.925189 kubelet[3281]: I1213 01:57:31.925150 3281 scope.go:117] "RemoveContainer" containerID="e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4" Dec 13 01:57:31.936877 containerd[2044]: time="2024-12-13T01:57:31.936808405Z" level=info msg="RemoveContainer for \"e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4\"" Dec 13 01:57:31.979113 containerd[2044]: time="2024-12-13T01:57:31.978957985Z" level=info msg="RemoveContainer for \"e2593db3e5e2c9584d4fde8a469f30bfc05cb0b7a261a9142b58854726ef32c4\" returns successfully" Dec 13 01:57:32.007535 sshd[7357]: Accepted publickey for core from 139.178.68.195 port 52962 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:32.011874 sshd[7357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:32.030102 systemd-logind[2008]: New session 25 of user core. Dec 13 01:57:32.044044 systemd[1]: Started cri-containerd-1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7.scope - libcontainer container 1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7. Dec 13 01:57:32.052999 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:57:32.112271 containerd[2044]: time="2024-12-13T01:57:32.111887554Z" level=info msg="StartContainer for \"1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7\" returns successfully" Dec 13 01:57:32.351192 sshd[7357]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:32.362689 systemd[1]: sshd@24-172.31.19.88:22-139.178.68.195:52962.service: Deactivated successfully. 
Dec 13 01:57:32.371091 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:57:32.377396 systemd-logind[2008]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:57:32.381841 systemd-logind[2008]: Removed session 25. Dec 13 01:57:32.987013 kubelet[3281]: I1213 01:57:32.986955 3281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46af5ed3-04d4-4283-aa3b-cd658fdc701a" path="/var/lib/kubelet/pods/46af5ed3-04d4-4283-aa3b-cd658fdc701a/volumes" Dec 13 01:57:33.990795 systemd[1]: cri-containerd-1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7.scope: Deactivated successfully. Dec 13 01:57:33.991247 systemd[1]: cri-containerd-1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7.scope: Consumed 1.236s CPU time. Dec 13 01:57:34.036858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7-rootfs.mount: Deactivated successfully. Dec 13 01:57:34.053936 containerd[2044]: time="2024-12-13T01:57:34.053823156Z" level=info msg="shim disconnected" id=1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7 namespace=k8s.io Dec 13 01:57:34.053936 containerd[2044]: time="2024-12-13T01:57:34.053904888Z" level=warning msg="cleaning up after shim disconnected" id=1ce9a826cb9580bc57b17e2510df54ddbba49c77de36180a398e185925c64ce7 namespace=k8s.io Dec 13 01:57:34.053936 containerd[2044]: time="2024-12-13T01:57:34.053946516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:34.903680 containerd[2044]: time="2024-12-13T01:57:34.901481332Z" level=info msg="CreateContainer within sandbox \"427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:57:34.948585 containerd[2044]: time="2024-12-13T01:57:34.946803124Z" level=info msg="CreateContainer within sandbox \"427195e8ffdedea089756eaf825cf90e81da990b758cc9539be266aef1716d0e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5ade3706686d1053525c094c31f683ba1f197f2cbd25b9b92479def88a61d071\"" Dec 13 01:57:34.956245 containerd[2044]: time="2024-12-13T01:57:34.955744036Z" level=info msg="StartContainer for \"5ade3706686d1053525c094c31f683ba1f197f2cbd25b9b92479def88a61d071\"" Dec 13 01:57:34.958806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630957647.mount: Deactivated successfully. Dec 13 01:57:35.016999 systemd[1]: Started cri-containerd-5ade3706686d1053525c094c31f683ba1f197f2cbd25b9b92479def88a61d071.scope - libcontainer container 5ade3706686d1053525c094c31f683ba1f197f2cbd25b9b92479def88a61d071. 
Dec 13 01:57:35.092548 containerd[2044]: time="2024-12-13T01:57:35.092491489Z" level=info msg="StartContainer for \"5ade3706686d1053525c094c31f683ba1f197f2cbd25b9b92479def88a61d071\" returns successfully" Dec 13 01:57:35.949496 kubelet[3281]: I1213 01:57:35.948916 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kjsl4" podStartSLOduration=5.948889553 podStartE2EDuration="5.948889553s" podCreationTimestamp="2024-12-13 01:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:35.946236545 +0000 UTC m=+113.216291735" watchObservedRunningTime="2024-12-13 01:57:35.948889553 +0000 UTC m=+113.218944839" Dec 13 01:57:37.400324 systemd[1]: Started sshd@25-172.31.19.88:22-139.178.68.195:36694.service - OpenSSH per-connection server daemon (139.178.68.195:36694). Dec 13 01:57:37.604664 sshd[7633]: Accepted publickey for core from 139.178.68.195 port 36694 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:37.608994 sshd[7633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:37.629729 systemd-logind[2008]: New session 26 of user core. Dec 13 01:57:37.643555 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:57:37.974943 sshd[7633]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:37.988231 systemd[1]: sshd@25-172.31.19.88:22-139.178.68.195:36694.service: Deactivated successfully. Dec 13 01:57:38.000850 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:57:38.007553 systemd-logind[2008]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:57:38.013265 systemd-logind[2008]: Removed session 26. Dec 13 01:57:38.400128 (udev-worker)[7705]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:57:38.401283 (udev-worker)[7706]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:57:43.015193 systemd[1]: Started sshd@26-172.31.19.88:22-139.178.68.195:36710.service - OpenSSH per-connection server daemon (139.178.68.195:36710). Dec 13 01:57:43.209861 sshd[7746]: Accepted publickey for core from 139.178.68.195 port 36710 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:43.217598 sshd[7746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:43.242944 systemd-logind[2008]: New session 27 of user core. Dec 13 01:57:43.248978 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:57:43.525023 sshd[7746]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:43.531143 systemd[1]: sshd@26-172.31.19.88:22-139.178.68.195:36710.service: Deactivated successfully. Dec 13 01:57:43.535585 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:57:43.537192 systemd-logind[2008]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:57:43.541590 systemd-logind[2008]: Removed session 27. 
Dec 13 01:57:44.844536 kubelet[3281]: I1213 01:57:44.844410 3281 scope.go:117] "RemoveContainer" containerID="b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9" Dec 13 01:57:44.847455 containerd[2044]: time="2024-12-13T01:57:44.847386673Z" level=info msg="RemoveContainer for \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\"" Dec 13 01:57:44.853980 containerd[2044]: time="2024-12-13T01:57:44.853923565Z" level=info msg="RemoveContainer for \"b31c5061d67f1b8bb98948271fe7d0156535a9f602dbcb6fefb56d80b94373d9\" returns successfully" Dec 13 01:57:44.857148 containerd[2044]: time="2024-12-13T01:57:44.856849621Z" level=info msg="StopPodSandbox for \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\"" Dec 13 01:57:44.857148 containerd[2044]: time="2024-12-13T01:57:44.856992421Z" level=info msg="TearDown network for sandbox \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" successfully" Dec 13 01:57:44.857148 containerd[2044]: time="2024-12-13T01:57:44.857018617Z" level=info msg="StopPodSandbox for \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" returns successfully" Dec 13 01:57:44.858141 containerd[2044]: time="2024-12-13T01:57:44.857868961Z" level=info msg="RemovePodSandbox for \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\"" Dec 13 01:57:44.858141 containerd[2044]: time="2024-12-13T01:57:44.857995165Z" level=info msg="Forcibly stopping sandbox \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\"" Dec 13 01:57:44.858141 containerd[2044]: time="2024-12-13T01:57:44.858097417Z" level=info msg="TearDown network for sandbox \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" successfully" Dec 13 01:57:44.869192 containerd[2044]: time="2024-12-13T01:57:44.868847845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:44.869192 containerd[2044]: time="2024-12-13T01:57:44.869010973Z" level=info msg="RemovePodSandbox \"cf32c3c9a352c53cd24f0b225fe108d265130f8ad9ed2911920472360e9863d8\" returns successfully" Dec 13 01:57:44.870338 containerd[2044]: time="2024-12-13T01:57:44.870049333Z" level=info msg="StopPodSandbox for \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\"" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:44.961 [WARNING][7780] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:44.961 [INFO][7780] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:44.961 [INFO][7780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" iface="eth0" netns="" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:44.961 [INFO][7780] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:44.961 [INFO][7780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:45.003 [INFO][7786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:45.003 [INFO][7786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:45.003 [INFO][7786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:45.017 [WARNING][7786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:45.017 [INFO][7786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:45.021 [INFO][7786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:45.026526 containerd[2044]: 2024-12-13 01:57:45.023 [INFO][7780] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.026526 containerd[2044]: time="2024-12-13T01:57:45.026463058Z" level=info msg="TearDown network for sandbox \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" successfully" Dec 13 01:57:45.028872 containerd[2044]: time="2024-12-13T01:57:45.026684566Z" level=info msg="StopPodSandbox for \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" returns successfully" Dec 13 01:57:45.028872 containerd[2044]: time="2024-12-13T01:57:45.028624138Z" level=info msg="RemovePodSandbox for \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\"" Dec 13 01:57:45.028872 containerd[2044]: time="2024-12-13T01:57:45.028729426Z" level=info msg="Forcibly stopping sandbox \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\"" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.096 [WARNING][7804] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" WorkloadEndpoint="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.096 [INFO][7804] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.096 [INFO][7804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" iface="eth0" netns="" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.096 [INFO][7804] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.096 [INFO][7804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.138 [INFO][7810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.138 [INFO][7810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.139 [INFO][7810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.153 [WARNING][7810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.153 [INFO][7810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" HandleID="k8s-pod-network.880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Workload="ip--172--31--19--88-k8s-calico--kube--controllers--56cc9599d8--srdrd-eth0" Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.157 [INFO][7810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:57:45.163275 containerd[2044]: 2024-12-13 01:57:45.160 [INFO][7804] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6" Dec 13 01:57:45.163275 containerd[2044]: time="2024-12-13T01:57:45.163207127Z" level=info msg="TearDown network for sandbox \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" successfully" Dec 13 01:57:45.170166 containerd[2044]: time="2024-12-13T01:57:45.170089931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:57:45.170338 containerd[2044]: time="2024-12-13T01:57:45.170193887Z" level=info msg="RemovePodSandbox \"880fb84ca6e471783f04c2ef0175ece1e5964a71c118e51b684cfa653a7235f6\" returns successfully" Dec 13 01:57:45.171074 containerd[2044]: time="2024-12-13T01:57:45.170848103Z" level=info msg="StopPodSandbox for \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\"" Dec 13 01:57:45.171074 containerd[2044]: time="2024-12-13T01:57:45.170993111Z" level=info msg="TearDown network for sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" successfully" Dec 13 01:57:45.171074 containerd[2044]: time="2024-12-13T01:57:45.171017843Z" level=info msg="StopPodSandbox for \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" returns successfully" Dec 13 01:57:45.171946 containerd[2044]: time="2024-12-13T01:57:45.171527339Z" level=info msg="RemovePodSandbox for \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\"" Dec 13 01:57:45.171946 containerd[2044]: time="2024-12-13T01:57:45.171571007Z" level=info msg="Forcibly stopping sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\"" Dec 13 01:57:45.171946 containerd[2044]: time="2024-12-13T01:57:45.171702611Z" level=info msg="TearDown network for sandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" successfully" Dec 13 01:57:45.178263 containerd[2044]: time="2024-12-13T01:57:45.178190207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:57:45.178415 containerd[2044]: time="2024-12-13T01:57:45.178295699Z" level=info msg="RemovePodSandbox \"139561d146d42dc753c3912d72090ee16b198a4885a2259e954e30062bc7abf4\" returns successfully" Dec 13 01:57:57.425402 systemd[1]: cri-containerd-117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f.scope: Deactivated successfully. Dec 13 01:57:57.429531 systemd[1]: cri-containerd-117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f.scope: Consumed 9.948s CPU time. Dec 13 01:57:57.472909 containerd[2044]: time="2024-12-13T01:57:57.470058372Z" level=info msg="shim disconnected" id=117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f namespace=k8s.io Dec 13 01:57:57.472909 containerd[2044]: time="2024-12-13T01:57:57.470137080Z" level=warning msg="cleaning up after shim disconnected" id=117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f namespace=k8s.io Dec 13 01:57:57.472909 containerd[2044]: time="2024-12-13T01:57:57.470158776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:57.481086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f-rootfs.mount: Deactivated successfully. Dec 13 01:57:57.914370 systemd[1]: cri-containerd-77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc.scope: Deactivated successfully. Dec 13 01:57:57.916181 systemd[1]: cri-containerd-77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc.scope: Consumed 5.717s CPU time, 21.9M memory peak, 0B memory swap peak. Dec 13 01:57:57.956411 kubelet[3281]: I1213 01:57:57.956370 3281 scope.go:117] "RemoveContainer" containerID="117caf2ea75f44a7a61b997c99527647fd821dea43988f0f34a4de909489178f" Dec 13 01:57:57.964389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc-rootfs.mount: Deactivated successfully. Dec 13 01:57:57.966578 containerd[2044]: time="2024-12-13T01:57:57.966343250Z" level=info msg="shim disconnected" id=77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc namespace=k8s.io Dec 13 01:57:57.966578 containerd[2044]: time="2024-12-13T01:57:57.966423014Z" level=warning msg="cleaning up after shim disconnected" id=77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc namespace=k8s.io Dec 13 01:57:57.966578 containerd[2044]: time="2024-12-13T01:57:57.966444710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:57.971281 containerd[2044]: time="2024-12-13T01:57:57.970950183Z" level=info msg="CreateContainer within sandbox \"04248e3e57963f0c7de9534ce093816bbcbfb55dbb1fba87398d5431f50c4648\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 13 01:57:58.004380 containerd[2044]: time="2024-12-13T01:57:58.004281887Z" level=info msg="CreateContainer within sandbox \"04248e3e57963f0c7de9534ce093816bbcbfb55dbb1fba87398d5431f50c4648\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5c503c4f0f712c203c7fbc1d12c6588e9cc47db92d673fb8e01c9a9c8df97eb5\"" Dec 13 01:57:58.006680 containerd[2044]: time="2024-12-13T01:57:58.005755091Z" level=info msg="StartContainer for \"5c503c4f0f712c203c7fbc1d12c6588e9cc47db92d673fb8e01c9a9c8df97eb5\"" Dec 13 01:57:58.009906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012866031.mount: Deactivated successfully. 
Dec 13 01:57:58.066978 systemd[1]: Started cri-containerd-5c503c4f0f712c203c7fbc1d12c6588e9cc47db92d673fb8e01c9a9c8df97eb5.scope - libcontainer container 5c503c4f0f712c203c7fbc1d12c6588e9cc47db92d673fb8e01c9a9c8df97eb5. Dec 13 01:57:58.121589 containerd[2044]: time="2024-12-13T01:57:58.121368647Z" level=info msg="StartContainer for \"5c503c4f0f712c203c7fbc1d12c6588e9cc47db92d673fb8e01c9a9c8df97eb5\" returns successfully" Dec 13 01:57:58.962419 kubelet[3281]: I1213 01:57:58.962354 3281 scope.go:117] "RemoveContainer" containerID="77b8a3e3cbd62fdfc1ea26f002d73a5b5d234c9ec819549e2cee9e41c1f9c4bc" Dec 13 01:57:58.968546 containerd[2044]: time="2024-12-13T01:57:58.968479827Z" level=info msg="CreateContainer within sandbox \"f44dc000d77c83321bd98da5d76d81b00175008e3645aeb99b88b791b692775a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 01:57:58.997182 containerd[2044]: time="2024-12-13T01:57:58.996999832Z" level=info msg="CreateContainer within sandbox \"f44dc000d77c83321bd98da5d76d81b00175008e3645aeb99b88b791b692775a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"775976a61f6d17f95922c7e2fe554f86f31e5e88f691ac31f4f237c5dbc9539e\"" Dec 13 01:57:58.999069 containerd[2044]: time="2024-12-13T01:57:58.998888140Z" level=info msg="StartContainer for \"775976a61f6d17f95922c7e2fe554f86f31e5e88f691ac31f4f237c5dbc9539e\"" Dec 13 01:57:59.061979 systemd[1]: Started cri-containerd-775976a61f6d17f95922c7e2fe554f86f31e5e88f691ac31f4f237c5dbc9539e.scope - libcontainer container 775976a61f6d17f95922c7e2fe554f86f31e5e88f691ac31f4f237c5dbc9539e. Dec 13 01:57:59.136691 containerd[2044]: time="2024-12-13T01:57:59.134045268Z" level=info msg="StartContainer for \"775976a61f6d17f95922c7e2fe554f86f31e5e88f691ac31f4f237c5dbc9539e\" returns successfully" Dec 13 01:58:02.609885 systemd[1]: cri-containerd-6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61.scope: Deactivated successfully. Dec 13 01:58:02.610349 systemd[1]: cri-containerd-6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61.scope: Consumed 3.665s CPU time, 16.1M memory peak, 0B memory swap peak. Dec 13 01:58:02.671525 containerd[2044]: time="2024-12-13T01:58:02.669729462Z" level=info msg="shim disconnected" id=6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61 namespace=k8s.io Dec 13 01:58:02.671525 containerd[2044]: time="2024-12-13T01:58:02.671131122Z" level=warning msg="cleaning up after shim disconnected" id=6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61 namespace=k8s.io Dec 13 01:58:02.671525 containerd[2044]: time="2024-12-13T01:58:02.671159790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:58:02.671315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61-rootfs.mount: Deactivated successfully. 
Dec 13 01:58:02.992613 kubelet[3281]: I1213 01:58:02.992463 3281 scope.go:117] "RemoveContainer" containerID="6529e9d70d7581c6f65cd65725d90c0c2715f760cf227dc64c766e4d7af51e61" Dec 13 01:58:02.998265 containerd[2044]: time="2024-12-13T01:58:02.998039491Z" level=info msg="CreateContainer within sandbox \"f07d863cb07332b78fe6d29b243df06de8f68f591230a1b8697474eb14e1b987\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 01:58:03.027907 containerd[2044]: time="2024-12-13T01:58:03.027826000Z" level=info msg="CreateContainer within sandbox \"f07d863cb07332b78fe6d29b243df06de8f68f591230a1b8697474eb14e1b987\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d6cb7ea65998aa86a02d4d1362033d3fa86f54dafbf76530e22090e1cf6d2d7a\"" Dec 13 01:58:03.029713 containerd[2044]: time="2024-12-13T01:58:03.028786288Z" level=info msg="StartContainer for \"d6cb7ea65998aa86a02d4d1362033d3fa86f54dafbf76530e22090e1cf6d2d7a\"" Dec 13 01:58:03.085992 systemd[1]: Started cri-containerd-d6cb7ea65998aa86a02d4d1362033d3fa86f54dafbf76530e22090e1cf6d2d7a.scope - libcontainer container d6cb7ea65998aa86a02d4d1362033d3fa86f54dafbf76530e22090e1cf6d2d7a. Dec 13 01:58:03.157843 containerd[2044]: time="2024-12-13T01:58:03.157748296Z" level=info msg="StartContainer for \"d6cb7ea65998aa86a02d4d1362033d3fa86f54dafbf76530e22090e1cf6d2d7a\" returns successfully" Dec 13 01:58:05.556750 kubelet[3281]: E1213 01:58:05.556623 3281 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-88?timeout=10s\": context deadline exceeded"