Jul 2 08:17:24.367328 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 2 08:17:24.367350 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 08:17:24.367358 kernel: KASLR enabled Jul 2 08:17:24.367366 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 2 08:17:24.367371 kernel: printk: bootconsole [pl11] enabled Jul 2 08:17:24.367377 kernel: efi: EFI v2.7 by EDK II Jul 2 08:17:24.367384 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3e198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Jul 2 08:17:24.367390 kernel: random: crng init done Jul 2 08:17:24.367396 kernel: ACPI: Early table checksum verification disabled Jul 2 08:17:24.367402 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jul 2 08:17:24.367408 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367414 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367422 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jul 2 08:17:24.367428 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367435 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367442 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367448 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367456 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367463 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367469 kernel: ACPI: PPTT 
0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 2 08:17:24.367475 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 08:17:24.367482 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 2 08:17:24.367488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 2 08:17:24.367494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 2 08:17:24.367501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 2 08:17:24.367507 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 2 08:17:24.367513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 2 08:17:24.367520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 2 08:17:24.367527 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 2 08:17:24.367534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 2 08:17:24.367540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 2 08:17:24.367547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 2 08:17:24.367553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 2 08:17:24.367559 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 2 08:17:24.367565 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jul 2 08:17:24.367572 kernel: Zone ranges: Jul 2 08:17:24.367578 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 2 08:17:24.367584 kernel: DMA32 empty Jul 2 08:17:24.367590 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 2 08:17:24.367598 kernel: Movable zone start for each node Jul 2 08:17:24.367608 kernel: Early memory node ranges Jul 2 08:17:24.367614 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 2 08:17:24.367621 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jul 2 08:17:24.367628 kernel: node 0: [mem 
0x000000003ec81000-0x000000003eca9fff] Jul 2 08:17:24.367636 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jul 2 08:17:24.367643 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jul 2 08:17:24.367649 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jul 2 08:17:24.367656 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jul 2 08:17:24.367663 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jul 2 08:17:24.367669 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 2 08:17:24.367676 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 2 08:17:24.367683 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 2 08:17:24.367689 kernel: psci: probing for conduit method from ACPI. Jul 2 08:17:24.367696 kernel: psci: PSCIv1.1 detected in firmware. Jul 2 08:17:24.367703 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 08:17:24.367709 kernel: psci: MIGRATE_INFO_TYPE not supported. Jul 2 08:17:24.367718 kernel: psci: SMC Calling Convention v1.4 Jul 2 08:17:24.367724 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 2 08:17:24.367731 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 2 08:17:24.367738 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 08:17:24.367755 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 08:17:24.367762 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 08:17:24.367769 kernel: Detected PIPT I-cache on CPU0 Jul 2 08:17:24.367775 kernel: CPU features: detected: GIC system register CPU interface Jul 2 08:17:24.367782 kernel: CPU features: detected: Hardware dirty bit management Jul 2 08:17:24.367789 kernel: CPU features: detected: Spectre-BHB Jul 2 08:17:24.367796 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 2 08:17:24.367802 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 2 08:17:24.367811 kernel: CPU features: detected: ARM erratum 1418040 Jul 2 
08:17:24.367818 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 2 08:17:24.367825 kernel: alternatives: applying boot alternatives Jul 2 08:17:24.367832 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7 Jul 2 08:17:24.367840 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 08:17:24.367846 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 08:17:24.367853 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:17:24.367860 kernel: Fallback order for Node 0: 0 Jul 2 08:17:24.367867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 2 08:17:24.367874 kernel: Policy zone: Normal Jul 2 08:17:24.367882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:17:24.367888 kernel: software IO TLB: area num 2. Jul 2 08:17:24.367895 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Jul 2 08:17:24.367902 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Jul 2 08:17:24.367909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 08:17:24.367916 kernel: trace event string verifier disabled Jul 2 08:17:24.367923 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 08:17:24.367930 kernel: rcu: RCU event tracing is enabled. Jul 2 08:17:24.367937 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
Jul 2 08:17:24.367944 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 08:17:24.367951 kernel: Tracing variant of Tasks RCU enabled. Jul 2 08:17:24.367957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 08:17:24.367966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 08:17:24.367973 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 08:17:24.367979 kernel: GICv3: 960 SPIs implemented Jul 2 08:17:24.367986 kernel: GICv3: 0 Extended SPIs implemented Jul 2 08:17:24.367993 kernel: Root IRQ handler: gic_handle_irq Jul 2 08:17:24.367999 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 2 08:17:24.368006 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 2 08:17:24.368013 kernel: ITS: No ITS available, not enabling LPIs Jul 2 08:17:24.368020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 08:17:24.368027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 08:17:24.368034 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 2 08:17:24.368046 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 2 08:17:24.368053 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 2 08:17:24.368060 kernel: Console: colour dummy device 80x25 Jul 2 08:17:24.368068 kernel: printk: console [tty1] enabled Jul 2 08:17:24.368075 kernel: ACPI: Core revision 20230628 Jul 2 08:17:24.368082 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 2 08:17:24.368089 kernel: pid_max: default: 32768 minimum: 301 Jul 2 08:17:24.368096 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 08:17:24.368103 kernel: SELinux: Initializing. 
Jul 2 08:17:24.368110 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:17:24.368118 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:17:24.368125 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:17:24.368132 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:17:24.368139 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 2 08:17:24.368146 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jul 2 08:17:24.368153 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 2 08:17:24.368160 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:17:24.368173 kernel: rcu: Max phase no-delay instances is 400. Jul 2 08:17:24.368181 kernel: Remapping and enabling EFI services. Jul 2 08:17:24.368188 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:17:24.368196 kernel: Detected PIPT I-cache on CPU1 Jul 2 08:17:24.368204 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 2 08:17:24.368212 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 08:17:24.368219 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 2 08:17:24.368226 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:17:24.368233 kernel: SMP: Total of 2 processors activated. 
Jul 2 08:17:24.368242 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 08:17:24.368250 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 2 08:17:24.368258 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 2 08:17:24.368265 kernel: CPU features: detected: CRC32 instructions Jul 2 08:17:24.368272 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 2 08:17:24.368279 kernel: CPU features: detected: LSE atomic instructions Jul 2 08:17:24.368287 kernel: CPU features: detected: Privileged Access Never Jul 2 08:17:24.368294 kernel: CPU: All CPU(s) started at EL1 Jul 2 08:17:24.368301 kernel: alternatives: applying system-wide alternatives Jul 2 08:17:24.368310 kernel: devtmpfs: initialized Jul 2 08:17:24.368318 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:17:24.368325 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 08:17:24.368333 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:17:24.368340 kernel: SMBIOS 3.1.0 present. 
Jul 2 08:17:24.368347 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jul 2 08:17:24.368355 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:17:24.368362 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 08:17:24.368370 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 08:17:24.368379 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 08:17:24.368386 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:17:24.368394 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jul 2 08:17:24.368401 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:17:24.368408 kernel: cpuidle: using governor menu Jul 2 08:17:24.368415 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 08:17:24.368422 kernel: ASID allocator initialised with 32768 entries Jul 2 08:17:24.368430 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:17:24.368437 kernel: Serial: AMBA PL011 UART driver Jul 2 08:17:24.368446 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 2 08:17:24.368453 kernel: Modules: 0 pages in range for non-PLT usage Jul 2 08:17:24.368460 kernel: Modules: 509120 pages in range for PLT usage Jul 2 08:17:24.368468 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:17:24.368475 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 08:17:24.368482 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 08:17:24.368490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 08:17:24.368498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:17:24.368505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 08:17:24.368514 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Jul 2 08:17:24.368521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 08:17:24.368529 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:17:24.368536 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:17:24.368543 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:17:24.368551 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:17:24.368558 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:17:24.368565 kernel: ACPI: Interpreter enabled Jul 2 08:17:24.368572 kernel: ACPI: Using GIC for interrupt routing Jul 2 08:17:24.368581 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 2 08:17:24.368589 kernel: printk: console [ttyAMA0] enabled Jul 2 08:17:24.368597 kernel: printk: bootconsole [pl11] disabled Jul 2 08:17:24.368604 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 2 08:17:24.368611 kernel: iommu: Default domain type: Translated Jul 2 08:17:24.368619 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 08:17:24.368626 kernel: efivars: Registered efivars operations Jul 2 08:17:24.368633 kernel: vgaarb: loaded Jul 2 08:17:24.368640 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 08:17:24.368649 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:17:24.368657 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:17:24.368664 kernel: pnp: PnP ACPI init Jul 2 08:17:24.368671 kernel: pnp: PnP ACPI: found 0 devices Jul 2 08:17:24.368679 kernel: NET: Registered PF_INET protocol family Jul 2 08:17:24.368686 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 08:17:24.368693 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 08:17:24.368701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:17:24.368708 kernel: TCP established hash table 
entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:17:24.368717 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 08:17:24.368724 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 08:17:24.368732 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:17:24.368739 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:17:24.368752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:17:24.368759 kernel: PCI: CLS 0 bytes, default 64 Jul 2 08:17:24.368766 kernel: kvm [1]: HYP mode not available Jul 2 08:17:24.368774 kernel: Initialise system trusted keyrings Jul 2 08:17:24.368781 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 08:17:24.368790 kernel: Key type asymmetric registered Jul 2 08:17:24.368798 kernel: Asymmetric key parser 'x509' registered Jul 2 08:17:24.368805 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 08:17:24.368812 kernel: io scheduler mq-deadline registered Jul 2 08:17:24.368820 kernel: io scheduler kyber registered Jul 2 08:17:24.368827 kernel: io scheduler bfq registered Jul 2 08:17:24.368834 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:17:24.368842 kernel: thunder_xcv, ver 1.0 Jul 2 08:17:24.368849 kernel: thunder_bgx, ver 1.0 Jul 2 08:17:24.368856 kernel: nicpf, ver 1.0 Jul 2 08:17:24.368865 kernel: nicvf, ver 1.0 Jul 2 08:17:24.369002 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 08:17:24.369076 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:17:23 UTC (1719908243) Jul 2 08:17:24.369087 kernel: efifb: probing for efifb Jul 2 08:17:24.369094 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 2 08:17:24.369102 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 2 08:17:24.369109 kernel: efifb: scrolling: redraw Jul 2 08:17:24.369119 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Jul 2 08:17:24.369127 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 08:17:24.369134 kernel: fb0: EFI VGA frame buffer device Jul 2 08:17:24.369142 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 2 08:17:24.369149 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 08:17:24.369156 kernel: No ACPI PMU IRQ for CPU0 Jul 2 08:17:24.369164 kernel: No ACPI PMU IRQ for CPU1 Jul 2 08:17:24.369171 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 2 08:17:24.369179 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 08:17:24.369188 kernel: watchdog: Hard watchdog permanently disabled Jul 2 08:17:24.369195 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:17:24.369202 kernel: Segment Routing with IPv6 Jul 2 08:17:24.369210 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:17:24.369217 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:17:24.369225 kernel: Key type dns_resolver registered Jul 2 08:17:24.369232 kernel: registered taskstats version 1 Jul 2 08:17:24.369239 kernel: Loading compiled-in X.509 certificates Jul 2 08:17:24.369246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 08:17:24.369256 kernel: Key type .fscrypt registered Jul 2 08:17:24.369263 kernel: Key type fscrypt-provisioning registered Jul 2 08:17:24.369270 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 08:17:24.369278 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:17:24.369285 kernel: ima: No architecture policies found Jul 2 08:17:24.369293 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 08:17:24.369300 kernel: clk: Disabling unused clocks Jul 2 08:17:24.369307 kernel: Freeing unused kernel memory: 39040K Jul 2 08:17:24.369315 kernel: Run /init as init process Jul 2 08:17:24.369324 kernel: with arguments: Jul 2 08:17:24.369331 kernel: /init Jul 2 08:17:24.369338 kernel: with environment: Jul 2 08:17:24.369345 kernel: HOME=/ Jul 2 08:17:24.369353 kernel: TERM=linux Jul 2 08:17:24.369360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:17:24.369369 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:17:24.369378 systemd[1]: Detected virtualization microsoft. Jul 2 08:17:24.369388 systemd[1]: Detected architecture arm64. Jul 2 08:17:24.369395 systemd[1]: Running in initrd. Jul 2 08:17:24.369403 systemd[1]: No hostname configured, using default hostname. Jul 2 08:17:24.369411 systemd[1]: Hostname set to . Jul 2 08:17:24.369419 systemd[1]: Initializing machine ID from random generator. Jul 2 08:17:24.369426 systemd[1]: Queued start job for default target initrd.target. Jul 2 08:17:24.369434 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:17:24.369442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:17:24.369452 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 2 08:17:24.369460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:17:24.369468 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 08:17:24.369476 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 08:17:24.369485 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 08:17:24.369493 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 08:17:24.369501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:17:24.369511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:17:24.369519 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:17:24.369527 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:17:24.369534 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:17:24.369542 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:17:24.369550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:17:24.369558 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:17:24.369566 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 08:17:24.369575 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 08:17:24.369583 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:17:24.369591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:17:24.369599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:17:24.369607 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:17:24.369615 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 2 08:17:24.369623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:17:24.369631 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 08:17:24.369638 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:17:24.369648 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:17:24.369670 systemd-journald[217]: Collecting audit messages is disabled. Jul 2 08:17:24.369690 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:17:24.369699 systemd-journald[217]: Journal started Jul 2 08:17:24.369720 systemd-journald[217]: Runtime Journal (/run/log/journal/285ca347655144369a506b8734d4a0e5) is 8.0M, max 78.6M, 70.6M free. Jul 2 08:17:24.388554 systemd-modules-load[218]: Inserted module 'overlay' Jul 2 08:17:24.396121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:17:24.415261 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:17:24.416159 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 08:17:24.448871 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:17:24.448901 kernel: Bridge firewalling registered Jul 2 08:17:24.432642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:17:24.447783 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 2 08:17:24.453767 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:17:24.463538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:17:24.479228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:17:24.504065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:17:24.512939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 08:17:24.542979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:17:24.562898 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:17:24.571476 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:17:24.600944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:17:24.608358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:17:24.634118 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 08:17:24.642974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:17:24.667623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:17:24.692964 dracut-cmdline[252]: dracut-dracut-053 Jul 2 08:17:24.692964 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7 Jul 2 08:17:24.679970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:17:24.733521 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:17:24.775960 systemd-resolved[269]: Positive Trust Anchors: Jul 2 08:17:24.780733 systemd-resolved[269]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:17:24.780778 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:17:24.783646 systemd-resolved[269]: Defaulting to hostname 'linux'. Jul 2 08:17:24.784560 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:17:24.792040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:17:24.906775 kernel: SCSI subsystem initialized Jul 2 08:17:24.914786 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:17:24.925889 kernel: iscsi: registered transport (tcp) Jul 2 08:17:24.944347 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:17:24.944370 kernel: QLogic iSCSI HBA Driver Jul 2 08:17:24.986332 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 08:17:25.007899 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 08:17:25.040695 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 08:17:25.040762 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:17:25.047490 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:17:25.101771 kernel: raid6: neonx8 gen() 15770 MB/s Jul 2 08:17:25.118753 kernel: raid6: neonx4 gen() 15670 MB/s Jul 2 08:17:25.138756 kernel: raid6: neonx2 gen() 13215 MB/s Jul 2 08:17:25.159755 kernel: raid6: neonx1 gen() 10510 MB/s Jul 2 08:17:25.179753 kernel: raid6: int64x8 gen() 6960 MB/s Jul 2 08:17:25.199753 kernel: raid6: int64x4 gen() 7341 MB/s Jul 2 08:17:25.220755 kernel: raid6: int64x2 gen() 6131 MB/s Jul 2 08:17:25.244286 kernel: raid6: int64x1 gen() 5059 MB/s Jul 2 08:17:25.244312 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s Jul 2 08:17:25.269002 kernel: raid6: .... xor() 11913 MB/s, rmw enabled Jul 2 08:17:25.269014 kernel: raid6: using neon recovery algorithm Jul 2 08:17:25.283899 kernel: xor: measuring software checksum speed Jul 2 08:17:25.283916 kernel: 8regs : 19869 MB/sec Jul 2 08:17:25.287970 kernel: 32regs : 19659 MB/sec Jul 2 08:17:25.292061 kernel: arm64_neon : 27215 MB/sec Jul 2 08:17:25.296454 kernel: xor: using function: arm64_neon (27215 MB/sec) Jul 2 08:17:25.348769 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:17:25.360676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:17:25.379931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:17:25.404637 systemd-udevd[439]: Using default interface naming scheme 'v255'. Jul 2 08:17:25.411067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:17:25.430054 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:17:25.448696 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Jul 2 08:17:25.480882 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 08:17:25.499072 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:17:25.537627 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:17:25.558078 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:17:25.596291 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:17:25.614866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:17:25.633799 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:17:25.652121 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:17:25.671780 kernel: hv_vmbus: Vmbus version:5.3 Jul 2 08:17:25.679023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:17:25.700392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:17:25.770018 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 08:17:25.770053 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 08:17:25.770064 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 08:17:25.770074 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 08:17:25.770083 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 08:17:25.770101 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 08:17:25.770111 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 08:17:25.770121 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 08:17:25.770297 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 08:17:25.700558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 08:17:25.757663 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:17:25.806269 kernel: scsi host0: storvsc_host_t
Jul 2 08:17:25.806464 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 2 08:17:25.806513 kernel: scsi host1: storvsc_host_t
Jul 2 08:17:25.795760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:17:25.820378 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 2 08:17:25.795998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:17:25.833989 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:17:25.873464 kernel: PTP clock support registered
Jul 2 08:17:25.873533 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: VF slot 1 added
Jul 2 08:17:25.867314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:17:25.873963 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:17:25.907732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:17:25.927493 kernel: hv_utils: Registering HyperV Utility Driver
Jul 2 08:17:25.927521 kernel: hv_vmbus: registering driver hv_utils
Jul 2 08:17:25.927531 kernel: hv_utils: Heartbeat IC version 3.0
Jul 2 08:17:25.927540 kernel: hv_utils: Shutdown IC version 3.2
Jul 2 08:17:25.907867 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:17:25.723301 kernel: hv_utils: TimeSync IC version 4.0
Jul 2 08:17:25.730744 systemd-journald[217]: Time jumped backwards, rotating.
Jul 2 08:17:25.722072 systemd-resolved[269]: Clock change detected. Flushing caches.
Jul 2 08:17:25.736471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:17:25.767439 kernel: hv_vmbus: registering driver hv_pci
Jul 2 08:17:25.767464 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 2 08:17:25.791715 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 08:17:25.791731 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 2 08:17:25.861793 kernel: hv_pci 89458fee-a3b4-4d91-91f5-d0f906b041d5: PCI VMBus probing: Using version 0x10004
Jul 2 08:17:25.901703 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 2 08:17:25.901855 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 2 08:17:25.901944 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 2 08:17:25.902037 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 2 08:17:25.902132 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 2 08:17:25.902218 kernel: hv_pci 89458fee-a3b4-4d91-91f5-d0f906b041d5: PCI host bridge to bus a3b4:00
Jul 2 08:17:25.902304 kernel: pci_bus a3b4:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 2 08:17:25.902465 kernel: pci_bus a3b4:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 2 08:17:25.902546 kernel: pci a3b4:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 2 08:17:25.902646 kernel: pci a3b4:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 08:17:25.902732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 08:17:25.902746 kernel: pci a3b4:00:02.0: enabling Extended Tags
Jul 2 08:17:25.902828 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 2 08:17:25.903032 kernel: pci a3b4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a3b4:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 2 08:17:25.903152 kernel: pci_bus a3b4:00: busn_res: [bus 00-ff] end is updated to 00
Jul 2 08:17:25.903234 kernel: pci a3b4:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 08:17:25.800160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:17:25.827625 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:17:25.920656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:17:25.964386 kernel: mlx5_core a3b4:00:02.0: enabling device (0000 -> 0002)
Jul 2 08:17:26.190431 kernel: mlx5_core a3b4:00:02.0: firmware version: 16.30.1284
Jul 2 08:17:26.190569 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: VF registering: eth1
Jul 2 08:17:26.190674 kernel: mlx5_core a3b4:00:02.0 eth1: joined to eth0
Jul 2 08:17:26.190797 kernel: mlx5_core a3b4:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 2 08:17:26.200356 kernel: mlx5_core a3b4:00:02.0 enP41908s1: renamed from eth1
Jul 2 08:17:26.432245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 2 08:17:26.580581 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (493)
Jul 2 08:17:26.598256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 2 08:17:26.612615 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 2 08:17:26.724234 kernel: BTRFS: device fsid 9b0eb482-485a-4aff-8de4-e09ff146eadf devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (497)
Jul 2 08:17:26.740672 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 2 08:17:26.748154 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 2 08:17:26.788664 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 08:17:26.827564 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 08:17:26.839385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 08:17:27.841446 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 08:17:27.842297 disk-uuid[606]: The operation has completed successfully.
Jul 2 08:17:27.903168 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 08:17:27.903278 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 08:17:27.944446 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 08:17:27.958131 sh[692]: Success
Jul 2 08:17:27.977358 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 08:17:28.339784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 08:17:28.357495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 08:17:28.368935 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 08:17:28.404317 kernel: BTRFS info (device dm-0): first mount of filesystem 9b0eb482-485a-4aff-8de4-e09ff146eadf
Jul 2 08:17:28.404370 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:17:28.411557 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 08:17:28.416850 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 08:17:28.421796 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 08:17:28.994408 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 08:17:28.999600 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 08:17:29.020659 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 08:17:29.029532 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 08:17:29.073064 kernel: BTRFS info (device sda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:17:29.073125 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:17:29.073135 kernel: BTRFS info (device sda6): using free space tree
Jul 2 08:17:29.128342 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:17:29.154486 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 08:17:29.172234 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 08:17:29.186047 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 08:17:29.191372 kernel: BTRFS info (device sda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:17:29.192025 systemd-networkd[866]: lo: Link UP
Jul 2 08:17:29.192039 systemd-networkd[866]: lo: Gained carrier
Jul 2 08:17:29.194132 systemd-networkd[866]: Enumeration completed
Jul 2 08:17:29.195860 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 08:17:29.202528 systemd[1]: Reached target network.target - Network.
Jul 2 08:17:29.204435 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:17:29.204438 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:17:29.220707 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 08:17:29.256607 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 08:17:29.317329 kernel: mlx5_core a3b4:00:02.0 enP41908s1: Link up
Jul 2 08:17:29.359911 systemd-networkd[866]: enP41908s1: Link UP
Jul 2 08:17:29.364670 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: Data path switched to VF: enP41908s1
Jul 2 08:17:29.360033 systemd-networkd[866]: eth0: Link UP
Jul 2 08:17:29.360164 systemd-networkd[866]: eth0: Gained carrier
Jul 2 08:17:29.360172 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:17:29.369565 systemd-networkd[866]: enP41908s1: Gained carrier
Jul 2 08:17:29.399357 systemd-networkd[866]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 2 08:17:30.834198 ignition[877]: Ignition 2.18.0
Jul 2 08:17:30.834213 ignition[877]: Stage: fetch-offline
Jul 2 08:17:30.838117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:17:30.834259 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:30.834268 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:30.834406 ignition[877]: parsed url from cmdline: ""
Jul 2 08:17:30.861623 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 08:17:30.834410 ignition[877]: no config URL provided
Jul 2 08:17:30.834415 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:17:30.834423 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:17:30.834427 ignition[877]: failed to fetch config: resource requires networking
Jul 2 08:17:30.834616 ignition[877]: Ignition finished successfully
Jul 2 08:17:30.883834 ignition[886]: Ignition 2.18.0
Jul 2 08:17:30.883845 ignition[886]: Stage: fetch
Jul 2 08:17:30.884171 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:30.884185 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:30.884332 ignition[886]: parsed url from cmdline: ""
Jul 2 08:17:30.884336 ignition[886]: no config URL provided
Jul 2 08:17:30.884344 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:17:30.884352 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:17:30.884376 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 2 08:17:30.986491 systemd-networkd[866]: eth0: Gained IPv6LL
Jul 2 08:17:31.000826 ignition[886]: GET result: OK
Jul 2 08:17:31.000936 ignition[886]: config has been read from IMDS userdata
Jul 2 08:17:31.000976 ignition[886]: parsing config with SHA512: 1245e8ecce5f2a63a0470793ea04d07123e2157313d24f0038ac3f7064f72dca2a4361ceadaf65075057079477d5b22e9729c761bd8e61d177b590e501cf7aef
Jul 2 08:17:31.004835 unknown[886]: fetched base config from "system"
Jul 2 08:17:31.005280 ignition[886]: fetch: fetch complete
Jul 2 08:17:31.004842 unknown[886]: fetched base config from "system"
Jul 2 08:17:31.005285 ignition[886]: fetch: fetch passed
Jul 2 08:17:31.004847 unknown[886]: fetched user config from "azure"
Jul 2 08:17:31.005360 ignition[886]: Ignition finished successfully
Jul 2 08:17:31.011032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 08:17:31.036655 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 08:17:31.063465 ignition[893]: Ignition 2.18.0
Jul 2 08:17:31.063477 ignition[893]: Stage: kargs
Jul 2 08:17:31.069543 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 08:17:31.063676 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:31.063686 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:31.064812 ignition[893]: kargs: kargs passed
Jul 2 08:17:31.089674 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 08:17:31.064871 ignition[893]: Ignition finished successfully
Jul 2 08:17:31.114587 ignition[900]: Ignition 2.18.0
Jul 2 08:17:31.114594 ignition[900]: Stage: disks
Jul 2 08:17:31.117675 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 08:17:31.114754 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:31.125762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 08:17:31.114764 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:31.135216 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 08:17:31.115681 ignition[900]: disks: disks passed
Jul 2 08:17:31.147477 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 08:17:31.115734 ignition[900]: Ignition finished successfully
Jul 2 08:17:31.158161 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 08:17:31.170676 systemd[1]: Reached target basic.target - Basic System.
Jul 2 08:17:31.197559 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 08:17:31.277515 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 2 08:17:31.288359 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 08:17:31.308519 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 08:17:31.371519 systemd-networkd[866]: enP41908s1: Gained IPv6LL
Jul 2 08:17:31.378131 kernel: EXT4-fs (sda9): mounted filesystem 9aacfbff-cef8-4758-afb5-6310e7c6c5e6 r/w with ordered data mode. Quota mode: none.
Jul 2 08:17:31.377820 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 08:17:31.387527 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 08:17:31.461428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:17:31.469495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 08:17:31.483636 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 08:17:31.492563 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 08:17:31.492617 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:17:31.506331 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 08:17:31.557839 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (920)
Jul 2 08:17:31.557880 kernel: BTRFS info (device sda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:17:31.557891 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:17:31.528967 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 08:17:31.574865 kernel: BTRFS info (device sda6): using free space tree
Jul 2 08:17:31.583326 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 08:17:31.584926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:17:32.589420 coreos-metadata[922]: Jul 02 08:17:32.589 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 08:17:32.599637 coreos-metadata[922]: Jul 02 08:17:32.599 INFO Fetch successful
Jul 2 08:17:32.599637 coreos-metadata[922]: Jul 02 08:17:32.599 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 2 08:17:32.616508 coreos-metadata[922]: Jul 02 08:17:32.613 INFO Fetch successful
Jul 2 08:17:32.622486 coreos-metadata[922]: Jul 02 08:17:32.618 INFO wrote hostname ci-3975.1.1-a-7c4c792b73 to /sysroot/etc/hostname
Jul 2 08:17:32.623187 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 08:17:32.905117 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 08:17:32.940388 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Jul 2 08:17:32.946808 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 08:17:32.953020 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 08:17:34.198876 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 08:17:34.212609 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 08:17:34.225609 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 08:17:34.242122 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 08:17:34.249871 kernel: BTRFS info (device sda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:17:34.275185 ignition[1038]: INFO : Ignition 2.18.0
Jul 2 08:17:34.275185 ignition[1038]: INFO : Stage: mount
Jul 2 08:17:34.285184 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:34.285184 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:34.285184 ignition[1038]: INFO : mount: mount passed
Jul 2 08:17:34.285184 ignition[1038]: INFO : Ignition finished successfully
Jul 2 08:17:34.281889 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 08:17:34.290596 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 08:17:34.317569 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 08:17:34.335591 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 08:17:34.373339 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050)
Jul 2 08:17:34.388114 kernel: BTRFS info (device sda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483
Jul 2 08:17:34.388166 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 08:17:34.392433 kernel: BTRFS info (device sda6): using free space tree
Jul 2 08:17:34.400650 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 08:17:34.401863 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 08:17:34.426776 ignition[1067]: INFO : Ignition 2.18.0
Jul 2 08:17:34.426776 ignition[1067]: INFO : Stage: files
Jul 2 08:17:34.435047 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:34.435047 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:34.435047 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 08:17:34.435047 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 08:17:34.435047 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 08:17:34.526065 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 08:17:34.533631 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 08:17:34.533631 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 08:17:34.528848 unknown[1067]: wrote ssh authorized keys file for user: core
Jul 2 08:17:34.564057 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:17:34.575357 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 08:17:34.732354 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 08:17:34.932653 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 08:17:34.932653 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jul 2 08:17:35.366754 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 08:17:35.538341 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 08:17:35.538341 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 08:17:35.579190 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 08:17:35.589979 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:17:35.589979 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:17:35.589979 ignition[1067]: INFO : files: files passed
Jul 2 08:17:35.589979 ignition[1067]: INFO : Ignition finished successfully
Jul 2 08:17:35.593159 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 08:17:35.638701 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 08:17:35.657545 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 08:17:35.665845 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:17:35.707455 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:17:35.707455 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:17:35.665949 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 08:17:35.743463 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:17:35.706440 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:17:35.713989 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 08:17:35.743639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 08:17:35.784576 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:17:35.785391 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 08:17:35.797932 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 08:17:35.803850 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 08:17:35.816716 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 08:17:35.828591 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 08:17:35.859336 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:17:35.875592 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 08:17:35.893471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:17:35.900502 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:17:35.913337 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 08:17:35.925961 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:17:35.926137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:17:35.942607 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 08:17:35.948554 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 08:17:35.959890 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 08:17:35.971369 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:17:35.982909 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 08:17:35.995070 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 08:17:36.007302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:17:36.020581 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 08:17:36.031648 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 08:17:36.043995 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 08:17:36.053894 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:17:36.054066 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:17:36.069710 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:17:36.081350 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:17:36.093492 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 08:17:36.093607 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:17:36.106347 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:17:36.106521 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:17:36.124201 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:17:36.124393 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:17:36.138797 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:17:36.138951 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 08:17:36.149680 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 08:17:36.149833 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 08:17:36.219661 ignition[1120]: INFO : Ignition 2.18.0
Jul 2 08:17:36.219661 ignition[1120]: INFO : Stage: umount
Jul 2 08:17:36.219661 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:36.219661 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:36.219661 ignition[1120]: INFO : umount: umount passed
Jul 2 08:17:36.219661 ignition[1120]: INFO : Ignition finished successfully
Jul 2 08:17:36.183471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 08:17:36.192728 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:17:36.192976 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:17:36.202619 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 08:17:36.225269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:17:36.225545 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:17:36.239173 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:17:36.239364 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:17:36.260366 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:17:36.261188 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:17:36.261299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 08:17:36.279067 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:17:36.279163 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 08:17:36.287539 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 08:17:36.287618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 08:17:36.298666 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 08:17:36.298734 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 08:17:36.309377 systemd[1]: Stopped target network.target - Network.
Jul 2 08:17:36.324210 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:17:36.324284 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:17:36.336521 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 08:17:36.341516 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:17:36.345332 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:17:36.354690 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 08:17:36.365748 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 08:17:36.377395 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:17:36.377450 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:17:36.390056 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:17:36.390102 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:17:36.400877 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 08:17:36.400936 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 08:17:36.411438 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 08:17:36.411489 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 08:17:36.423491 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 08:17:36.435084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 08:17:36.447389 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:17:36.447477 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 08:17:36.452378 systemd-networkd[866]: eth0: DHCPv6 lease lost
Jul 2 08:17:36.468460 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:17:36.724170 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: Data path switched from VF: enP41908s1
Jul 2 08:17:36.468568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 08:17:36.482542 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:17:36.482711 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 08:17:36.494920 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:17:36.495001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:17:36.524832 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 08:17:36.538130 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:17:36.538204 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:17:36.554139 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:17:36.554217 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:17:36.565381 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:17:36.565434 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:17:36.577581 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 08:17:36.577626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:17:36.589397 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:17:36.634219 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 08:17:36.634364 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 08:17:36.646340 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 08:17:36.646510 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:17:36.660491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 08:17:36.660582 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:17:36.668318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 08:17:36.668367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:17:36.679864 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 08:17:36.679926 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 08:17:36.708704 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 08:17:36.708776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:17:36.724264 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:17:36.724359 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:17:36.737215 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 08:17:36.737297 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 08:17:36.768557 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 08:17:36.997190 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jul 2 08:17:36.781410 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 08:17:36.781520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:17:36.796567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:17:36.796635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:17:36.811433 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 08:17:36.811540 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 08:17:36.835831 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:17:36.835988 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 08:17:36.847858 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 08:17:36.879561 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 08:17:36.897651 systemd[1]: Switching root.
Jul 2 08:17:37.091461 systemd-journald[217]: Journal stopped
Jul 2 08:17:24.367328 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 08:17:24.367350 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 08:17:24.367358 kernel: KASLR enabled
Jul 2 08:17:24.367366 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 2 08:17:24.367371 kernel: printk: bootconsole [pl11] enabled
Jul 2 08:17:24.367377 kernel: efi: EFI v2.7 by EDK II
Jul 2 08:17:24.367384 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3e198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18
Jul 2 08:17:24.367390 kernel: random: crng init done
Jul 2 08:17:24.367396 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:17:24.367402 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Jul 2 08:17:24.367408 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367414 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367422 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 08:17:24.367428 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367435 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367442 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367448 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367456 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367463 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367469 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 2 08:17:24.367475 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:17:24.367482 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 2 08:17:24.367488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 2 08:17:24.367494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 2 08:17:24.367501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 2 08:17:24.367507 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 2 08:17:24.367513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 2 08:17:24.367520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 2 08:17:24.367527 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 2 08:17:24.367534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 2 08:17:24.367540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 2 08:17:24.367547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 2 08:17:24.367553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 2 08:17:24.367559 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 2 08:17:24.367565 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jul 2 08:17:24.367572 kernel: Zone ranges:
Jul 2 08:17:24.367578 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 2 08:17:24.367584 kernel: DMA32 empty
Jul 2 08:17:24.367590 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 08:17:24.367598 kernel: Movable zone start for each node
Jul 2 08:17:24.367608 kernel: Early memory node ranges
Jul 2 08:17:24.367614 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 2 08:17:24.367621 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Jul 2 08:17:24.367628 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Jul 2 08:17:24.367636 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Jul 2 08:17:24.367643 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Jul 2 08:17:24.367649 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Jul 2 08:17:24.367656 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Jul 2 08:17:24.367663 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Jul 2 08:17:24.367669 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 08:17:24.367676 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 08:17:24.367683 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 2 08:17:24.367689 kernel: psci: probing for conduit method from ACPI.
Jul 2 08:17:24.367696 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 08:17:24.367703 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 08:17:24.367709 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 2 08:17:24.367718 kernel: psci: SMC Calling Convention v1.4
Jul 2 08:17:24.367724 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 2 08:17:24.367731 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 2 08:17:24.367738 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 08:17:24.367755 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 08:17:24.367762 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 08:17:24.367769 kernel: Detected PIPT I-cache on CPU0
Jul 2 08:17:24.367775 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 08:17:24.367782 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 08:17:24.367789 kernel: CPU features: detected: Spectre-BHB
Jul 2 08:17:24.367796 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 08:17:24.367802 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 08:17:24.367811 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 08:17:24.367818 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 2 08:17:24.367825 kernel: alternatives: applying boot alternatives
Jul 2 08:17:24.367832 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:17:24.367840 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:17:24.367846 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 08:17:24.367853 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:17:24.367860 kernel: Fallback order for Node 0: 0
Jul 2 08:17:24.367867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 2 08:17:24.367874 kernel: Policy zone: Normal
Jul 2 08:17:24.367882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:17:24.367888 kernel: software IO TLB: area num 2.
Jul 2 08:17:24.367895 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB)
Jul 2 08:17:24.367902 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved)
Jul 2 08:17:24.367909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 08:17:24.367916 kernel: trace event string verifier disabled
Jul 2 08:17:24.367923 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 08:17:24.367930 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:17:24.367937 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 08:17:24.367944 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 08:17:24.367951 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:17:24.367957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:17:24.367966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 08:17:24.367973 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 08:17:24.367979 kernel: GICv3: 960 SPIs implemented
Jul 2 08:17:24.367986 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 08:17:24.367993 kernel: Root IRQ handler: gic_handle_irq
Jul 2 08:17:24.367999 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 08:17:24.368006 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 2 08:17:24.368013 kernel: ITS: No ITS available, not enabling LPIs
Jul 2 08:17:24.368020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 08:17:24.368027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:17:24.368034 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 08:17:24.368046 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 08:17:24.368053 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 08:17:24.368060 kernel: Console: colour dummy device 80x25
Jul 2 08:17:24.368068 kernel: printk: console [tty1] enabled
Jul 2 08:17:24.368075 kernel: ACPI: Core revision 20230628
Jul 2 08:17:24.368082 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 08:17:24.368089 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:17:24.368096 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 08:17:24.368103 kernel: SELinux: Initializing.
Jul 2 08:17:24.368110 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:17:24.368118 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:17:24.368125 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:17:24.368132 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:17:24.368139 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 2 08:17:24.368146 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Jul 2 08:17:24.368153 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 2 08:17:24.368160 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:17:24.368173 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 08:17:24.368181 kernel: Remapping and enabling EFI services.
Jul 2 08:17:24.368188 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:17:24.368196 kernel: Detected PIPT I-cache on CPU1
Jul 2 08:17:24.368204 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 2 08:17:24.368212 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:17:24.368219 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 08:17:24.368226 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 08:17:24.368233 kernel: SMP: Total of 2 processors activated.
Jul 2 08:17:24.368242 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 08:17:24.368250 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 2 08:17:24.368258 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 08:17:24.368265 kernel: CPU features: detected: CRC32 instructions
Jul 2 08:17:24.368272 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 08:17:24.368279 kernel: CPU features: detected: LSE atomic instructions
Jul 2 08:17:24.368287 kernel: CPU features: detected: Privileged Access Never
Jul 2 08:17:24.368294 kernel: CPU: All CPU(s) started at EL1
Jul 2 08:17:24.368301 kernel: alternatives: applying system-wide alternatives
Jul 2 08:17:24.368310 kernel: devtmpfs: initialized
Jul 2 08:17:24.368318 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:17:24.368325 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 08:17:24.368333 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:17:24.368340 kernel: SMBIOS 3.1.0 present.
Jul 2 08:17:24.368347 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Jul 2 08:17:24.368355 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:17:24.368362 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 08:17:24.368370 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 08:17:24.368379 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 08:17:24.368386 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:17:24.368394 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1
Jul 2 08:17:24.368401 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:17:24.368408 kernel: cpuidle: using governor menu
Jul 2 08:17:24.368415 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 08:17:24.368422 kernel: ASID allocator initialised with 32768 entries
Jul 2 08:17:24.368430 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:17:24.368437 kernel: Serial: AMBA PL011 UART driver
Jul 2 08:17:24.368446 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 08:17:24.368453 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 08:17:24.368460 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 08:17:24.368468 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 08:17:24.368475 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 08:17:24.368482 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 08:17:24.368490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 08:17:24.368498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:17:24.368505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 08:17:24.368514 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 08:17:24.368521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 08:17:24.368529 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:17:24.368536 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:17:24.368543 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:17:24.368551 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:17:24.368558 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:17:24.368565 kernel: ACPI: Interpreter enabled
Jul 2 08:17:24.368572 kernel: ACPI: Using GIC for interrupt routing
Jul 2 08:17:24.368581 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 08:17:24.368589 kernel: printk: console [ttyAMA0] enabled
Jul 2 08:17:24.368597 kernel: printk: bootconsole [pl11] disabled
Jul 2 08:17:24.368604 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 2 08:17:24.368611 kernel: iommu: Default domain type: Translated
Jul 2 08:17:24.368619 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 08:17:24.368626 kernel: efivars: Registered efivars operations
Jul 2 08:17:24.368633 kernel: vgaarb: loaded
Jul 2 08:17:24.368640 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 08:17:24.368649 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:17:24.368657 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:17:24.368664 kernel: pnp: PnP ACPI init
Jul 2 08:17:24.368671 kernel: pnp: PnP ACPI: found 0 devices
Jul 2 08:17:24.368679 kernel: NET: Registered PF_INET protocol family
Jul 2 08:17:24.368686 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 08:17:24.368693 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 08:17:24.368701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:17:24.368708 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:17:24.368717 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 08:17:24.368724 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 08:17:24.368732 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:17:24.368739 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:17:24.368752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:17:24.368759 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:17:24.368766 kernel: kvm [1]: HYP mode not available
Jul 2 08:17:24.368774 kernel: Initialise system trusted keyrings
Jul 2 08:17:24.368781 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 08:17:24.368790 kernel: Key type asymmetric registered
Jul 2 08:17:24.368798 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:17:24.368805 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 08:17:24.368812 kernel: io scheduler mq-deadline registered
Jul 2 08:17:24.368820 kernel: io scheduler kyber registered
Jul 2 08:17:24.368827 kernel: io scheduler bfq registered
Jul 2 08:17:24.368834 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:17:24.368842 kernel: thunder_xcv, ver 1.0
Jul 2 08:17:24.368849 kernel: thunder_bgx, ver 1.0
Jul 2 08:17:24.368856 kernel: nicpf, ver 1.0
Jul 2 08:17:24.368865 kernel: nicvf, ver 1.0
Jul 2 08:17:24.369002 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 08:17:24.369076 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:17:23 UTC (1719908243)
Jul 2 08:17:24.369087 kernel: efifb: probing for efifb
Jul 2 08:17:24.369094 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 08:17:24.369102 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 08:17:24.369109 kernel: efifb: scrolling: redraw
Jul 2 08:17:24.369119 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 08:17:24.369127 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 08:17:24.369134 kernel: fb0: EFI VGA frame buffer device
Jul 2 08:17:24.369142 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 2 08:17:24.369149 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 08:17:24.369156 kernel: No ACPI PMU IRQ for CPU0
Jul 2 08:17:24.369164 kernel: No ACPI PMU IRQ for CPU1
Jul 2 08:17:24.369171 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 2 08:17:24.369179 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 08:17:24.369188 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 08:17:24.369195 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:17:24.369202 kernel: Segment Routing with IPv6
Jul 2 08:17:24.369210 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:17:24.369217 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:17:24.369225 kernel: Key type dns_resolver registered
Jul 2 08:17:24.369232 kernel: registered taskstats version 1
Jul 2 08:17:24.369239 kernel: Loading compiled-in X.509 certificates
Jul 2 08:17:24.369246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 08:17:24.369256 kernel: Key type .fscrypt registered
Jul 2 08:17:24.369263 kernel: Key type fscrypt-provisioning registered
Jul 2 08:17:24.369270 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:17:24.369278 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:17:24.369285 kernel: ima: No architecture policies found
Jul 2 08:17:24.369293 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 08:17:24.369300 kernel: clk: Disabling unused clocks
Jul 2 08:17:24.369307 kernel: Freeing unused kernel memory: 39040K
Jul 2 08:17:24.369315 kernel: Run /init as init process
Jul 2 08:17:24.369324 kernel: with arguments:
Jul 2 08:17:24.369331 kernel: /init
Jul 2 08:17:24.369338 kernel: with environment:
Jul 2 08:17:24.369345 kernel: HOME=/
Jul 2 08:17:24.369353 kernel: TERM=linux
Jul 2 08:17:24.369360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:17:24.369369 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:17:24.369378 systemd[1]: Detected virtualization microsoft.
Jul 2 08:17:24.369388 systemd[1]: Detected architecture arm64.
Jul 2 08:17:24.369395 systemd[1]: Running in initrd.
Jul 2 08:17:24.369403 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:17:24.369411 systemd[1]: Hostname set to .
Jul 2 08:17:24.369419 systemd[1]: Initializing machine ID from random generator.
Jul 2 08:17:24.369426 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:17:24.369434 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:17:24.369442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:17:24.369452 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 08:17:24.369460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:17:24.369468 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 08:17:24.369476 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 08:17:24.369485 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 08:17:24.369493 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 08:17:24.369501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:17:24.369511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:17:24.369519 systemd[1]: Reached target paths.target - Path Units.
Jul 2 08:17:24.369527 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:17:24.369534 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:17:24.369542 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 08:17:24.369550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:17:24.369558 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:17:24.369566 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 08:17:24.369575 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 08:17:24.369583 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:17:24.369591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:17:24.369599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:17:24.369607 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 08:17:24.369615 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 08:17:24.369623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:17:24.369631 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 08:17:24.369638 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:17:24.369648 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:17:24.369670 systemd-journald[217]: Collecting audit messages is disabled.
Jul 2 08:17:24.369690 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:17:24.369699 systemd-journald[217]: Journal started
Jul 2 08:17:24.369720 systemd-journald[217]: Runtime Journal (/run/log/journal/285ca347655144369a506b8734d4a0e5) is 8.0M, max 78.6M, 70.6M free.
Jul 2 08:17:24.388554 systemd-modules-load[218]: Inserted module 'overlay'
Jul 2 08:17:24.396121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:17:24.415261 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:17:24.416159 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 08:17:24.448871 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:17:24.448901 kernel: Bridge firewalling registered
Jul 2 08:17:24.432642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:17:24.447783 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jul 2 08:17:24.453767 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:17:24.463538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:17:24.479228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:17:24.504065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:17:24.512939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:17:24.542979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 08:17:24.562898 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:17:24.571476 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:17:24.600944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:17:24.608358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:17:24.634118 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 08:17:24.642974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:17:24.667623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:17:24.692964 dracut-cmdline[252]: dracut-dracut-053
Jul 2 08:17:24.692964 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:17:24.679970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:17:24.733521 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:17:24.775960 systemd-resolved[269]: Positive Trust Anchors:
Jul 2 08:17:24.780733 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:17:24.780778 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:17:24.783646 systemd-resolved[269]: Defaulting to hostname 'linux'.
Jul 2 08:17:24.784560 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:17:24.792040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:17:24.906775 kernel: SCSI subsystem initialized
Jul 2 08:17:24.914786 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:17:24.925889 kernel: iscsi: registered transport (tcp)
Jul 2 08:17:24.944347 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:17:24.944370 kernel: QLogic iSCSI HBA Driver
Jul 2 08:17:24.986332 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:17:25.007899 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 08:17:25.040695 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:17:25.040762 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:17:25.047490 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:17:25.101771 kernel: raid6: neonx8 gen() 15770 MB/s Jul 2 08:17:25.118753 kernel: raid6: neonx4 gen() 15670 MB/s Jul 2 08:17:25.138756 kernel: raid6: neonx2 gen() 13215 MB/s Jul 2 08:17:25.159755 kernel: raid6: neonx1 gen() 10510 MB/s Jul 2 08:17:25.179753 kernel: raid6: int64x8 gen() 6960 MB/s Jul 2 08:17:25.199753 kernel: raid6: int64x4 gen() 7341 MB/s Jul 2 08:17:25.220755 kernel: raid6: int64x2 gen() 6131 MB/s Jul 2 08:17:25.244286 kernel: raid6: int64x1 gen() 5059 MB/s Jul 2 08:17:25.244312 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s Jul 2 08:17:25.269002 kernel: raid6: .... xor() 11913 MB/s, rmw enabled Jul 2 08:17:25.269014 kernel: raid6: using neon recovery algorithm Jul 2 08:17:25.283899 kernel: xor: measuring software checksum speed Jul 2 08:17:25.283916 kernel: 8regs : 19869 MB/sec Jul 2 08:17:25.287970 kernel: 32regs : 19659 MB/sec Jul 2 08:17:25.292061 kernel: arm64_neon : 27215 MB/sec Jul 2 08:17:25.296454 kernel: xor: using function: arm64_neon (27215 MB/sec) Jul 2 08:17:25.348769 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:17:25.360676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:17:25.379931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:17:25.404637 systemd-udevd[439]: Using default interface naming scheme 'v255'. Jul 2 08:17:25.411067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:17:25.430054 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:17:25.448696 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Jul 2 08:17:25.480882 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 08:17:25.499072 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:17:25.537627 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:17:25.558078 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:17:25.596291 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:17:25.614866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:17:25.633799 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:17:25.652121 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:17:25.671780 kernel: hv_vmbus: Vmbus version:5.3 Jul 2 08:17:25.679023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:17:25.700392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:17:25.770018 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 08:17:25.770053 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 08:17:25.770064 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 08:17:25.770074 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 08:17:25.770083 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 08:17:25.770101 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 08:17:25.770111 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 08:17:25.770121 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 08:17:25.770297 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 08:17:25.700558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 08:17:25.757663 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:17:25.806269 kernel: scsi host0: storvsc_host_t Jul 2 08:17:25.806464 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 08:17:25.806513 kernel: scsi host1: storvsc_host_t Jul 2 08:17:25.795760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:17:25.820378 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 08:17:25.795998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:17:25.833989 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:17:25.873464 kernel: PTP clock support registered Jul 2 08:17:25.873533 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: VF slot 1 added Jul 2 08:17:25.867314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:17:25.873963 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:17:25.907732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:17:25.927493 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 08:17:25.927521 kernel: hv_vmbus: registering driver hv_utils Jul 2 08:17:25.927531 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 08:17:25.927540 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 08:17:25.907867 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:17:25.723301 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 08:17:25.730744 systemd-journald[217]: Time jumped backwards, rotating. Jul 2 08:17:25.722072 systemd-resolved[269]: Clock change detected. Flushing caches. Jul 2 08:17:25.736471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 2 08:17:25.767439 kernel: hv_vmbus: registering driver hv_pci Jul 2 08:17:25.767464 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 08:17:25.791715 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 08:17:25.791731 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 08:17:25.861793 kernel: hv_pci 89458fee-a3b4-4d91-91f5-d0f906b041d5: PCI VMBus probing: Using version 0x10004 Jul 2 08:17:25.901703 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 08:17:25.901855 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 08:17:25.901944 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 08:17:25.902037 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 08:17:25.902132 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 08:17:25.902218 kernel: hv_pci 89458fee-a3b4-4d91-91f5-d0f906b041d5: PCI host bridge to bus a3b4:00 Jul 2 08:17:25.902304 kernel: pci_bus a3b4:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 2 08:17:25.902465 kernel: pci_bus a3b4:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 08:17:25.902546 kernel: pci a3b4:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 2 08:17:25.902646 kernel: pci a3b4:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 08:17:25.902732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:17:25.902746 kernel: pci a3b4:00:02.0: enabling Extended Tags Jul 2 08:17:25.902828 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 08:17:25.903032 kernel: pci a3b4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a3b4:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 2 08:17:25.903152 kernel: pci_bus a3b4:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 08:17:25.903234 kernel: pci a3b4:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 08:17:25.800160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 2 08:17:25.827625 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:17:25.920656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:17:25.964386 kernel: mlx5_core a3b4:00:02.0: enabling device (0000 -> 0002) Jul 2 08:17:26.190431 kernel: mlx5_core a3b4:00:02.0: firmware version: 16.30.1284 Jul 2 08:17:26.190569 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: VF registering: eth1 Jul 2 08:17:26.190674 kernel: mlx5_core a3b4:00:02.0 eth1: joined to eth0 Jul 2 08:17:26.190797 kernel: mlx5_core a3b4:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 2 08:17:26.200356 kernel: mlx5_core a3b4:00:02.0 enP41908s1: renamed from eth1 Jul 2 08:17:26.432245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 2 08:17:26.580581 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (493) Jul 2 08:17:26.598256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 08:17:26.612615 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 2 08:17:26.724234 kernel: BTRFS: device fsid 9b0eb482-485a-4aff-8de4-e09ff146eadf devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (497) Jul 2 08:17:26.740672 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 2 08:17:26.748154 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 2 08:17:26.788664 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 2 08:17:26.827564 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:17:26.839385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:17:27.841446 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:17:27.842297 disk-uuid[606]: The operation has completed successfully. Jul 2 08:17:27.903168 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:17:27.903278 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 08:17:27.944446 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 08:17:27.958131 sh[692]: Success Jul 2 08:17:27.977358 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 08:17:28.339784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 08:17:28.357495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 08:17:28.368935 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 08:17:28.404317 kernel: BTRFS info (device dm-0): first mount of filesystem 9b0eb482-485a-4aff-8de4-e09ff146eadf Jul 2 08:17:28.404370 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:17:28.411557 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 08:17:28.416850 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 08:17:28.421796 kernel: BTRFS info (device dm-0): using free space tree Jul 2 08:17:28.994408 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 08:17:28.999600 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 08:17:29.020659 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 08:17:29.029532 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 2 08:17:29.073064 kernel: BTRFS info (device sda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:17:29.073125 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:17:29.073135 kernel: BTRFS info (device sda6): using free space tree Jul 2 08:17:29.128342 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:17:29.154486 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:17:29.172234 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 08:17:29.186047 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:17:29.191372 kernel: BTRFS info (device sda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:17:29.192025 systemd-networkd[866]: lo: Link UP Jul 2 08:17:29.192039 systemd-networkd[866]: lo: Gained carrier Jul 2 08:17:29.194132 systemd-networkd[866]: Enumeration completed Jul 2 08:17:29.195860 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:17:29.202528 systemd[1]: Reached target network.target - Network. Jul 2 08:17:29.204435 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:17:29.204438 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:17:29.220707 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 08:17:29.256607 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 2 08:17:29.317329 kernel: mlx5_core a3b4:00:02.0 enP41908s1: Link up Jul 2 08:17:29.359911 systemd-networkd[866]: enP41908s1: Link UP Jul 2 08:17:29.364670 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: Data path switched to VF: enP41908s1 Jul 2 08:17:29.360033 systemd-networkd[866]: eth0: Link UP Jul 2 08:17:29.360164 systemd-networkd[866]: eth0: Gained carrier Jul 2 08:17:29.360172 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:17:29.369565 systemd-networkd[866]: enP41908s1: Gained carrier Jul 2 08:17:29.399357 systemd-networkd[866]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 08:17:30.834198 ignition[877]: Ignition 2.18.0 Jul 2 08:17:30.834213 ignition[877]: Stage: fetch-offline Jul 2 08:17:30.838117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:17:30.834259 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:17:30.834268 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 08:17:30.834406 ignition[877]: parsed url from cmdline: "" Jul 2 08:17:30.861623 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 08:17:30.834410 ignition[877]: no config URL provided Jul 2 08:17:30.834415 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:17:30.834423 ignition[877]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:17:30.834427 ignition[877]: failed to fetch config: resource requires networking Jul 2 08:17:30.834616 ignition[877]: Ignition finished successfully Jul 2 08:17:30.883834 ignition[886]: Ignition 2.18.0 Jul 2 08:17:30.883845 ignition[886]: Stage: fetch Jul 2 08:17:30.884171 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:17:30.884185 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 08:17:30.884332 ignition[886]: parsed url from cmdline: "" Jul 2 08:17:30.884336 ignition[886]: no config URL provided Jul 2 08:17:30.884344 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:17:30.884352 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:17:30.884376 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 08:17:30.986491 systemd-networkd[866]: eth0: Gained IPv6LL Jul 2 08:17:31.000826 ignition[886]: GET result: OK Jul 2 08:17:31.000936 ignition[886]: config has been read from IMDS userdata Jul 2 08:17:31.000976 ignition[886]: parsing config with SHA512: 1245e8ecce5f2a63a0470793ea04d07123e2157313d24f0038ac3f7064f72dca2a4361ceadaf65075057079477d5b22e9729c761bd8e61d177b590e501cf7aef Jul 2 08:17:31.004835 unknown[886]: fetched base config from "system" Jul 2 08:17:31.005280 ignition[886]: fetch: fetch complete Jul 2 08:17:31.004842 unknown[886]: fetched base config from "system" Jul 2 08:17:31.005285 ignition[886]: fetch: fetch passed Jul 2 08:17:31.004847 unknown[886]: fetched user config from "azure" Jul 2 08:17:31.005360 ignition[886]: Ignition finished successfully Jul 2 08:17:31.011032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Jul 2 08:17:31.036655 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 08:17:31.063465 ignition[893]: Ignition 2.18.0 Jul 2 08:17:31.063477 ignition[893]: Stage: kargs Jul 2 08:17:31.069543 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 08:17:31.063676 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:17:31.063686 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 08:17:31.064812 ignition[893]: kargs: kargs passed Jul 2 08:17:31.089674 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 08:17:31.064871 ignition[893]: Ignition finished successfully Jul 2 08:17:31.114587 ignition[900]: Ignition 2.18.0 Jul 2 08:17:31.114594 ignition[900]: Stage: disks Jul 2 08:17:31.117675 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 08:17:31.114754 ignition[900]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:17:31.125762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 08:17:31.114764 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 08:17:31.135216 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:17:31.115681 ignition[900]: disks: disks passed Jul 2 08:17:31.147477 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:17:31.115734 ignition[900]: Ignition finished successfully Jul 2 08:17:31.158161 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:17:31.170676 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:17:31.197559 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 08:17:31.277515 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 2 08:17:31.288359 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jul 2 08:17:31.308519 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 08:17:31.371519 systemd-networkd[866]: enP41908s1: Gained IPv6LL Jul 2 08:17:31.378131 kernel: EXT4-fs (sda9): mounted filesystem 9aacfbff-cef8-4758-afb5-6310e7c6c5e6 r/w with ordered data mode. Quota mode: none. Jul 2 08:17:31.377820 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 08:17:31.387527 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 08:17:31.461428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:17:31.469495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 08:17:31.483636 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 2 08:17:31.492563 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:17:31.492617 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:17:31.506331 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 08:17:31.557839 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (920) Jul 2 08:17:31.557880 kernel: BTRFS info (device sda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:17:31.557891 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:17:31.528967 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 08:17:31.574865 kernel: BTRFS info (device sda6): using free space tree Jul 2 08:17:31.583326 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 08:17:31.584926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 08:17:32.589420 coreos-metadata[922]: Jul 02 08:17:32.589 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 08:17:32.599637 coreos-metadata[922]: Jul 02 08:17:32.599 INFO Fetch successful Jul 2 08:17:32.599637 coreos-metadata[922]: Jul 02 08:17:32.599 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 08:17:32.616508 coreos-metadata[922]: Jul 02 08:17:32.613 INFO Fetch successful Jul 2 08:17:32.622486 coreos-metadata[922]: Jul 02 08:17:32.618 INFO wrote hostname ci-3975.1.1-a-7c4c792b73 to /sysroot/etc/hostname Jul 2 08:17:32.623187 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 08:17:32.905117 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:17:32.940388 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:17:32.946808 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:17:32.953020 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:17:34.198876 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 08:17:34.212609 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 08:17:34.225609 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 08:17:34.242122 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 2 08:17:34.249871 kernel: BTRFS info (device sda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:17:34.275185 ignition[1038]: INFO : Ignition 2.18.0 Jul 2 08:17:34.275185 ignition[1038]: INFO : Stage: mount Jul 2 08:17:34.285184 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:17:34.285184 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 08:17:34.285184 ignition[1038]: INFO : mount: mount passed Jul 2 08:17:34.285184 ignition[1038]: INFO : Ignition finished successfully Jul 2 08:17:34.281889 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 08:17:34.290596 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 08:17:34.317569 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 08:17:34.335591 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:17:34.373339 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050) Jul 2 08:17:34.388114 kernel: BTRFS info (device sda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:17:34.388166 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:17:34.392433 kernel: BTRFS info (device sda6): using free space tree Jul 2 08:17:34.400650 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 08:17:34.401863 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 08:17:34.426776 ignition[1067]: INFO : Ignition 2.18.0 Jul 2 08:17:34.426776 ignition[1067]: INFO : Stage: files Jul 2 08:17:34.435047 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:17:34.435047 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 08:17:34.435047 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:17:34.435047 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:17:34.435047 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:17:34.526065 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:17:34.533631 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:17:34.533631 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:17:34.528848 unknown[1067]: wrote ssh authorized keys file for user: core Jul 2 08:17:34.564057 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:17:34.575357 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 08:17:34.732354 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:17:34.932653 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:17:34.932653 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 
08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:17:34.953204 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jul 2 08:17:35.366754 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 08:17:35.538341 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:17:35.538341 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 08:17:35.579190 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:17:35.589979 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:17:35.589979 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:17:35.589979 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:17:35.589979 ignition[1067]: INFO : files: files passed Jul 2 08:17:35.589979 ignition[1067]: INFO : Ignition finished successfully Jul 2 08:17:35.593159 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 08:17:35.638701 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 08:17:35.657545 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 2 08:17:35.665845 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:17:35.707455 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:17:35.707455 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:17:35.665949 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 08:17:35.743463 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:17:35.706440 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:17:35.713989 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 08:17:35.743639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 08:17:35.784576 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:17:35.785391 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 08:17:35.797932 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 08:17:35.803850 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 08:17:35.816716 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 08:17:35.828591 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 08:17:35.859336 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:17:35.875592 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 08:17:35.893471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:17:35.900502 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 2 08:17:35.913337 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 08:17:35.925961 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:17:35.926137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 08:17:35.942607 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 08:17:35.948554 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 08:17:35.959890 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 08:17:35.971369 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 08:17:35.982909 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 08:17:35.995070 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 08:17:36.007302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 08:17:36.020581 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 08:17:36.031648 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 08:17:36.043995 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 08:17:36.053894 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:17:36.054066 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 08:17:36.069710 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:17:36.081350 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:17:36.093492 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 08:17:36.093607 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:17:36.106347 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:17:36.106521 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 08:17:36.124201 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:17:36.124393 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 08:17:36.138797 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:17:36.138951 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 08:17:36.149680 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 08:17:36.149833 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 08:17:36.219661 ignition[1120]: INFO : Ignition 2.18.0
Jul 2 08:17:36.219661 ignition[1120]: INFO : Stage: umount
Jul 2 08:17:36.219661 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 08:17:36.219661 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:17:36.219661 ignition[1120]: INFO : umount: umount passed
Jul 2 08:17:36.219661 ignition[1120]: INFO : Ignition finished successfully
Jul 2 08:17:36.183471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 08:17:36.192728 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:17:36.192976 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:17:36.202619 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 08:17:36.225269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:17:36.225545 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:17:36.239173 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:17:36.239364 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:17:36.260366 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:17:36.261188 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:17:36.261299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 08:17:36.279067 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:17:36.279163 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 08:17:36.287539 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 08:17:36.287618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 08:17:36.298666 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 08:17:36.298734 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 08:17:36.309377 systemd[1]: Stopped target network.target - Network.
Jul 2 08:17:36.324210 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:17:36.324284 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 08:17:36.336521 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 08:17:36.341516 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:17:36.345332 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:17:36.354690 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 08:17:36.365748 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 08:17:36.377395 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:17:36.377450 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:17:36.390056 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:17:36.390102 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:17:36.400877 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 08:17:36.400936 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 08:17:36.411438 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 08:17:36.411489 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 08:17:36.423491 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 08:17:36.435084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 08:17:36.447389 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:17:36.447477 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 08:17:36.452378 systemd-networkd[866]: eth0: DHCPv6 lease lost
Jul 2 08:17:36.468460 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:17:36.724170 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: Data path switched from VF: enP41908s1
Jul 2 08:17:36.468568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 08:17:36.482542 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:17:36.482711 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 08:17:36.494920 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:17:36.495001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:17:36.524832 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 08:17:36.538130 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:17:36.538204 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 08:17:36.554139 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:17:36.554217 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:17:36.565381 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:17:36.565434 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:17:36.577581 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 08:17:36.577626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:17:36.589397 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:17:36.634219 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 08:17:36.634364 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 08:17:36.646340 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 08:17:36.646510 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:17:36.660491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 08:17:36.660582 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:17:36.668318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 08:17:36.668367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:17:36.679864 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 08:17:36.679926 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 08:17:36.708704 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 08:17:36.708776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:17:36.724264 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:17:36.724359 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:17:36.737215 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 08:17:36.737297 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 08:17:36.768557 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 08:17:36.997190 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jul 2 08:17:36.781410 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 08:17:36.781520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:17:36.796567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:17:36.796635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:17:36.811433 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 08:17:36.811540 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 08:17:36.835831 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:17:36.835988 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 08:17:36.847858 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 08:17:36.879561 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 08:17:36.897651 systemd[1]: Switching root.
Jul 2 08:17:37.091461 systemd-journald[217]: Journal stopped
Jul 2 08:17:43.232862 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 08:17:43.232886 kernel: SELinux: policy capability open_perms=1
Jul 2 08:17:43.232897 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 08:17:43.232907 kernel: SELinux: policy capability always_check_network=0
Jul 2 08:17:43.232915 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 08:17:43.232924 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 08:17:43.232934 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 08:17:43.232943 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 08:17:43.232951 kernel: audit: type=1403 audit(1719908258.922:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 08:17:43.232961 systemd[1]: Successfully loaded SELinux policy in 230.961ms.
Jul 2 08:17:43.232973 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.184ms.
Jul 2 08:17:43.232983 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:17:43.232992 systemd[1]: Detected virtualization microsoft.
Jul 2 08:17:43.233001 systemd[1]: Detected architecture arm64.
Jul 2 08:17:43.233011 systemd[1]: Detected first boot.
Jul 2 08:17:43.233023 systemd[1]: Hostname set to .
Jul 2 08:17:43.233032 systemd[1]: Initializing machine ID from random generator.
Jul 2 08:17:43.233041 zram_generator::config[1162]: No configuration found.
Jul 2 08:17:43.233051 systemd[1]: Populated /etc with preset unit settings.
Jul 2 08:17:43.233061 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 08:17:43.233070 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 08:17:43.233081 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:17:43.233091 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 08:17:43.233100 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 08:17:43.233110 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 08:17:43.233120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 08:17:43.233130 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 08:17:43.233140 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 08:17:43.233152 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 08:17:43.233161 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 08:17:43.233171 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:17:43.233181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:17:43.233190 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 08:17:43.233200 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 08:17:43.233209 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 08:17:43.233219 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:17:43.233229 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 2 08:17:43.233240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:17:43.233249 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 08:17:43.233259 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 08:17:43.233271 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 08:17:43.233281 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 08:17:43.233291 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 08:17:43.233301 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 08:17:43.233335 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:17:43.233346 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:17:43.233356 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 08:17:43.233366 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 08:17:43.233377 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:17:43.233387 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:17:43.233399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:17:43.233409 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 08:17:43.233419 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 08:17:43.233429 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 08:17:43.233439 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 08:17:43.233450 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 08:17:43.233459 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 08:17:43.233471 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 08:17:43.233482 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 08:17:43.233492 systemd[1]: Reached target machines.target - Containers.
Jul 2 08:17:43.233502 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 08:17:43.233512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:17:43.233522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:17:43.233532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 08:17:43.233542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:17:43.233552 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 08:17:43.233564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:17:43.233574 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 08:17:43.233584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:17:43.233595 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 08:17:43.233606 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 08:17:43.233616 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 08:17:43.233626 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 08:17:43.233636 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 08:17:43.233647 kernel: fuse: init (API version 7.39)
Jul 2 08:17:43.233657 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:17:43.233667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:17:43.233676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 08:17:43.233687 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 08:17:43.233714 systemd-journald[1264]: Collecting audit messages is disabled.
Jul 2 08:17:43.233737 systemd-journald[1264]: Journal started
Jul 2 08:17:43.233758 systemd-journald[1264]: Runtime Journal (/run/log/journal/5690443dfce34ec5b32065b292f73d48) is 8.0M, max 78.6M, 70.6M free.
Jul 2 08:17:42.087272 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 08:17:42.315654 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 2 08:17:42.316032 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 08:17:42.316385 systemd[1]: systemd-journald.service: Consumed 3.331s CPU time.
Jul 2 08:17:43.242352 kernel: loop: module loaded
Jul 2 08:17:43.242400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 08:17:43.271349 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 08:17:43.271454 systemd[1]: Stopped verity-setup.service.
Jul 2 08:17:43.293081 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:17:43.293150 kernel: ACPI: bus type drm_connector registered
Jul 2 08:17:43.293847 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 08:17:43.300064 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 08:17:43.306376 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 08:17:43.313017 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 08:17:43.319688 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 08:17:43.327014 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 08:17:43.332733 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 08:17:43.339502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:17:43.346719 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 08:17:43.346875 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 08:17:43.354054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:17:43.354191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:17:43.360791 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:17:43.360933 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 08:17:43.367795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:17:43.367937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:17:43.375449 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 08:17:43.375579 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 08:17:43.382400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:17:43.382533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:17:43.389060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:17:43.395906 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 08:17:43.403326 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 08:17:43.410573 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 08:17:43.427523 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 08:17:43.440423 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 08:17:43.448087 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 08:17:43.455646 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 08:17:43.455689 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 08:17:43.463600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 08:17:43.472437 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 08:17:43.480857 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 08:17:43.486819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:17:43.495594 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 08:17:43.502963 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 08:17:43.509954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:17:43.513563 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 08:17:43.522596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:17:43.523741 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:17:43.532567 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 08:17:43.550567 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 08:17:43.560141 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 08:17:43.569715 systemd-journald[1264]: Time spent on flushing to /var/log/journal/5690443dfce34ec5b32065b292f73d48 is 58.852ms for 898 entries.
Jul 2 08:17:43.569715 systemd-journald[1264]: System Journal (/var/log/journal/5690443dfce34ec5b32065b292f73d48) is 11.8M, max 2.6G, 2.6G free.
Jul 2 08:17:43.725007 systemd-journald[1264]: Received client request to flush runtime journal.
Jul 2 08:17:43.725179 systemd-journald[1264]: /var/log/journal/5690443dfce34ec5b32065b292f73d48/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 2 08:17:43.725238 systemd-journald[1264]: Rotating system journal.
Jul 2 08:17:43.725337 kernel: loop0: detected capacity change from 0 to 59672
Jul 2 08:17:43.725369 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 08:17:43.580714 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 08:17:43.592039 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 08:17:43.616602 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 08:17:43.625009 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 08:17:43.638030 udevadm[1299]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 08:17:43.638785 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 08:17:43.653533 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 08:17:43.729580 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 08:17:43.750746 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:17:43.770571 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 08:17:43.771232 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 08:17:43.778857 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 08:17:43.789598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:17:43.871178 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jul 2 08:17:43.871198 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jul 2 08:17:43.875790 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:17:44.453337 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 08:17:44.509346 kernel: loop1: detected capacity change from 0 to 56592
Jul 2 08:17:45.084624 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 08:17:45.106483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:17:45.126766 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
Jul 2 08:17:45.131335 kernel: loop2: detected capacity change from 0 to 113672
Jul 2 08:17:45.283333 kernel: loop3: detected capacity change from 0 to 194096
Jul 2 08:17:45.319339 kernel: loop4: detected capacity change from 0 to 59672
Jul 2 08:17:45.331333 kernel: loop5: detected capacity change from 0 to 56592
Jul 2 08:17:45.332179 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:17:45.351626 kernel: loop6: detected capacity change from 0 to 113672
Jul 2 08:17:45.352698 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 08:17:45.366348 kernel: loop7: detected capacity change from 0 to 194096
Jul 2 08:17:45.375327 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 2 08:17:45.375808 (sd-merge)[1324]: Merged extensions into '/usr'.
Jul 2 08:17:45.399041 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 08:17:45.399207 systemd[1]: Reloading...
Jul 2 08:17:45.494470 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1342)
Jul 2 08:17:45.542264 zram_generator::config[1378]: No configuration found.
Jul 2 08:17:45.542403 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 08:17:45.603236 kernel: hv_vmbus: registering driver hyperv_fb
Jul 2 08:17:45.603361 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 2 08:17:45.619092 kernel: hv_vmbus: registering driver hv_balloon
Jul 2 08:17:45.619208 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 2 08:17:45.619235 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 2 08:17:45.627709 kernel: Console: switching to colour dummy device 80x25
Jul 2 08:17:45.627813 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 2 08:17:45.640901 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 08:17:45.730288 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:17:45.744360 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1331)
Jul 2 08:17:45.835055 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 2 08:17:45.835476 systemd[1]: Reloading finished in 435 ms.
Jul 2 08:17:45.862185 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 08:17:45.895446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 2 08:17:45.911572 systemd[1]: Starting ensure-sysext.service...
Jul 2 08:17:45.918268 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 08:17:45.927525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:17:45.941491 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 08:17:45.953034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:17:45.966491 systemd[1]: Reloading requested from client PID 1477 ('systemctl') (unit ensure-sysext.service)...
Jul 2 08:17:45.966516 systemd[1]: Reloading...
Jul 2 08:17:45.969922 systemd-tmpfiles[1480]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 08:17:45.970392 systemd-tmpfiles[1480]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 08:17:45.971080 systemd-tmpfiles[1480]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 08:17:45.971289 systemd-tmpfiles[1480]: ACLs are not supported, ignoring.
Jul 2 08:17:45.971373 systemd-tmpfiles[1480]: ACLs are not supported, ignoring.
Jul 2 08:17:45.976013 systemd-tmpfiles[1480]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 08:17:45.976023 systemd-tmpfiles[1480]: Skipping /boot
Jul 2 08:17:45.988987 systemd-tmpfiles[1480]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 08:17:45.989955 systemd-tmpfiles[1480]: Skipping /boot
Jul 2 08:17:46.077349 zram_generator::config[1517]: No configuration found.
Jul 2 08:17:46.123293 systemd-networkd[1332]: lo: Link UP
Jul 2 08:17:46.123635 systemd-networkd[1332]: lo: Gained carrier
Jul 2 08:17:46.126166 systemd-networkd[1332]: Enumeration completed
Jul 2 08:17:46.126635 systemd-networkd[1332]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:17:46.126727 systemd-networkd[1332]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:17:46.176332 kernel: mlx5_core a3b4:00:02.0 enP41908s1: Link up
Jul 2 08:17:46.195169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:17:46.203341 kernel: hv_netvsc 00224877-dc5a-0022-4877-dc5a00224877 eth0: Data path switched to VF: enP41908s1
Jul 2 08:17:46.204157 systemd-networkd[1332]: enP41908s1: Link UP
Jul 2 08:17:46.204248 systemd-networkd[1332]: eth0: Link UP
Jul 2 08:17:46.204251 systemd-networkd[1332]: eth0: Gained carrier
Jul 2 08:17:46.204267 systemd-networkd[1332]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 08:17:46.207673 systemd-networkd[1332]: enP41908s1: Gained carrier
Jul 2 08:17:46.214354 systemd-networkd[1332]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 2 08:17:46.273483 systemd[1]: Reloading finished in 306 ms.
Jul 2 08:17:46.292809 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 08:17:46.299632 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 08:17:46.307554 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 08:17:46.319878 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 08:17:46.327406 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:17:46.350594 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 08:17:46.358090 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 08:17:46.369122 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 08:17:46.379569 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 08:17:46.390578 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 08:17:46.406485 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:17:46.417748 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 08:17:46.420331 lvm[1586]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 08:17:46.433073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:17:46.438392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:17:46.448677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:17:46.461618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:17:46.471419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:17:46.474159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:17:46.474544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:17:46.493109 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 08:17:46.501988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:17:46.502186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:17:46.509006 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:17:46.510876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:17:46.519577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:17:46.529663 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 08:17:46.529840 augenrules[1607]: No rules
Jul 2 08:17:46.538627 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 08:17:46.551052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:17:46.557961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:17:46.565496 systemd-resolved[1590]: Positive Trust Anchors:
Jul 2 08:17:46.565513 systemd-resolved[1590]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:17:46.565515 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 08:17:46.565544 systemd-resolved[1590]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:17:46.575673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:17:46.577587 lvm[1617]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 08:17:46.586766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:17:46.586968 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:17:46.589367 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 08:17:46.597537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:17:46.597695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:17:46.608881 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 08:17:46.616512 systemd-resolved[1590]: Using system hostname 'ci-3975.1.1-a-7c4c792b73'.
Jul 2 08:17:46.619484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 08:17:46.623615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 08:17:46.631673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 08:17:46.640665 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 08:17:46.651673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 08:17:46.657645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 08:17:46.657887 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 08:17:46.664673 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:17:46.671855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:17:46.673344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 08:17:46.680489 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:17:46.680675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 08:17:46.687706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:17:46.687858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 08:17:46.697022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:17:46.697887 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 08:17:46.710119 systemd[1]: Finished ensure-sysext.service.
Jul 2 08:17:46.718556 systemd[1]: Reached target network.target - Network.
Jul 2 08:17:46.724397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:17:46.731666 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:17:46.731718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 08:17:46.802890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 08:17:46.810943 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 08:17:48.010538 systemd-networkd[1332]: enP41908s1: Gained IPv6LL
Jul 2 08:17:48.138556 systemd-networkd[1332]: eth0: Gained IPv6LL
Jul 2 08:17:48.143388 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 08:17:48.150964 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 08:17:53.503100 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 08:17:53.516258 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 08:17:53.528492 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 08:17:53.544359 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 08:17:53.550966 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 08:17:53.557035 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 08:17:53.563866 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 08:17:53.571132 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 08:17:53.577443 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 08:17:53.584571 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 08:17:53.591545 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 08:17:53.591576 systemd[1]: Reached target paths.target - Path Units.
Jul 2 08:17:53.596582 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 08:17:53.602671 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 08:17:53.610213 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 08:17:53.619217 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 08:17:53.625709 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 08:17:53.631794 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 08:17:53.637105 systemd[1]: Reached target basic.target - Basic System.
Jul 2 08:17:53.642338 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 08:17:53.642363 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 08:17:53.651416 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 2 08:17:53.660477 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 08:17:53.673529 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 08:17:53.681260 (chronyd)[1638]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 2 08:17:53.683551 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 08:17:53.690599 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 08:17:53.698570 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 08:17:53.704804 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 08:17:53.706463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:17:53.718574 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 08:17:53.720924 chronyd[1650]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 2 08:17:53.731111 jq[1644]: false
Jul 2 08:17:53.732304 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 08:17:53.748453 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 08:17:53.757202 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 08:17:53.763622 chronyd[1650]: Timezone right/UTC failed leap second check, ignoring
Jul 2 08:17:53.763858 chronyd[1650]: Loaded seccomp filter (level 2)
Jul 2 08:17:53.768574 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 08:17:53.784253 extend-filesystems[1645]: Found loop4
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found loop5
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found loop6
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found loop7
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda1
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda2
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda3
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found usr
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda4
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda6
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda7
Jul 2 08:17:53.795125 extend-filesystems[1645]: Found sda9
Jul 2 08:17:53.795125 extend-filesystems[1645]: Checking size of /dev/sda9
Jul 2 08:17:53.786518 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 08:17:53.953414 dbus-daemon[1641]: [system] SELinux support is enabled
Jul 2 08:17:54.001794 extend-filesystems[1645]: Old size kept for /dev/sda9
Jul 2 08:17:54.001794 extend-filesystems[1645]: Found sr0
Jul 2 08:17:53.798841 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 08:17:54.023553 update_engine[1662]: I0702 08:17:53.970054 1662 main.cc:92] Flatcar Update Engine starting
Jul 2 08:17:54.023553 update_engine[1662]: I0702 08:17:53.974526 1662 update_check_scheduler.cc:74] Next update check in 2m22s
Jul 2 08:17:53.799390 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 08:17:54.042709 jq[1669]: true
Jul 2 08:17:53.805512 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 08:17:53.835389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 08:17:53.844709 systemd[1]: Started chronyd.service - NTP client/server.
Jul 2 08:17:53.861755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 08:17:53.861949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 08:17:53.863725 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 08:17:53.863895 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 08:17:53.884555 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 08:17:53.884752 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 08:17:53.914896 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 08:17:53.915091 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 08:17:53.921289 systemd-logind[1658]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 08:17:53.924470 systemd-logind[1658]: New seat seat0.
Jul 2 08:17:53.940862 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 08:17:53.977343 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 08:17:53.994139 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 08:17:54.056040 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 08:17:54.056553 dbus-daemon[1641]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 08:17:54.056114 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 08:17:54.070047 jq[1693]: true
Jul 2 08:17:54.088051 coreos-metadata[1640]: Jul 02 08:17:54.084 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 08:17:54.071994 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 08:17:54.072028 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 08:17:54.091034 coreos-metadata[1640]: Jul 02 08:17:54.089 INFO Fetch successful
Jul 2 08:17:54.091034 coreos-metadata[1640]: Jul 02 08:17:54.090 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 2 08:17:54.093174 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 08:17:54.098979 (ntainerd)[1694]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 08:17:54.101502 coreos-metadata[1640]: Jul 02 08:17:54.101 INFO Fetch successful
Jul 2 08:17:54.101502 coreos-metadata[1640]: Jul 02 08:17:54.101 INFO Fetching http://168.63.129.16/machine/8cacc6f1-1b42-4669-8098-0fde432da02f/0cf82518%2D0eab%2D46ce%2D9ccf%2Dc1172a259c88.%5Fci%2D3975.1.1%2Da%2D7c4c792b73?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 2 08:17:54.104039 tar[1676]: linux-arm64/helm
Jul 2 08:17:54.106729 coreos-metadata[1640]: Jul 02 08:17:54.106 INFO Fetch successful
Jul 2 08:17:54.106729 coreos-metadata[1640]: Jul 02 08:17:54.106 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 2 08:17:54.114672 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 08:17:54.129211 coreos-metadata[1640]: Jul 02 08:17:54.129 INFO Fetch successful
Jul 2 08:17:54.215810 bash[1725]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 08:17:54.218134 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 08:17:54.237885 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 08:17:54.343806 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1687)
Jul 2 08:17:54.364383 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 08:17:54.383246 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 08:17:54.436106 locksmithd[1710]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 08:17:54.625713 sshd_keygen[1663]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 08:17:54.652953 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 08:17:54.670786 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 08:17:54.679924 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 2 08:17:54.697170 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 08:17:54.697821 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 08:17:54.719678 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 08:17:54.737554 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 2 08:17:54.763585 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 08:17:54.790742 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 08:17:54.808363 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 2 08:17:54.815856 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 08:17:54.888660 containerd[1694]: time="2024-07-02T08:17:54.887466960Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 08:17:54.905043 tar[1676]: linux-arm64/LICENSE
Jul 2 08:17:54.905043 tar[1676]: linux-arm64/README.md
Jul 2 08:17:54.917336 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 08:17:54.943261 containerd[1694]: time="2024-07-02T08:17:54.942490120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 08:17:54.943261 containerd[1694]: time="2024-07-02T08:17:54.942554760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946147 containerd[1694]: time="2024-07-02T08:17:54.946094120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946147 containerd[1694]: time="2024-07-02T08:17:54.946143640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946436 containerd[1694]: time="2024-07-02T08:17:54.946408080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946436 containerd[1694]: time="2024-07-02T08:17:54.946433360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 08:17:54.946536 containerd[1694]: time="2024-07-02T08:17:54.946513840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946588 containerd[1694]: time="2024-07-02T08:17:54.946567680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946612 containerd[1694]: time="2024-07-02T08:17:54.946585960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946666 containerd[1694]: time="2024-07-02T08:17:54.946647800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946899 containerd[1694]: time="2024-07-02T08:17:54.946874720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 08:17:54.946926 containerd[1694]: time="2024-07-02T08:17:54.946901680Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 08:17:54.946926 containerd[1694]: time="2024-07-02T08:17:54.946912440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:17:54.947042 containerd[1694]: time="2024-07-02T08:17:54.947017440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:17:54.947042 containerd[1694]: time="2024-07-02T08:17:54.947036880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 08:17:54.947111 containerd[1694]: time="2024-07-02T08:17:54.947089840Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 08:17:54.947111 containerd[1694]: time="2024-07-02T08:17:54.947109000Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.962861480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.962914680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.962930200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.962975160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.962992560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963004320Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963021800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963191720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963210040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963229240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963243800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963260880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963279800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.963356 containerd[1694]: time="2024-07-02T08:17:54.963294960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.963933 containerd[1694]: time="2024-07-02T08:17:54.963883240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.963933 containerd[1694]: time="2024-07-02T08:17:54.963933040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.963995 containerd[1694]: time="2024-07-02T08:17:54.963952160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.963995 containerd[1694]: time="2024-07-02T08:17:54.963968640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.963995 containerd[1694]: time="2024-07-02T08:17:54.963982360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 08:17:54.965156 containerd[1694]: time="2024-07-02T08:17:54.964122760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 08:17:54.965156 containerd[1694]: time="2024-07-02T08:17:54.964427120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 08:17:54.965156 containerd[1694]: time="2024-07-02T08:17:54.964457480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.965156 containerd[1694]: time="2024-07-02T08:17:54.964473520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 08:17:54.965156 containerd[1694]: time="2024-07-02T08:17:54.964510920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 08:17:54.966125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.966070480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968404960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968452640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968467480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968482400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968519360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968532920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968549040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968565880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968795880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968832080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968846120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968860040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968876640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.972927 containerd[1694]: time="2024-07-02T08:17:54.968902160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.973386 containerd[1694]: time="2024-07-02T08:17:54.968915480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.973386 containerd[1694]: time="2024-07-02T08:17:54.968928400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.969225960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.969295400Z" level=info msg="Connect containerd service"
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.969362280Z" level=info msg="using legacy CRI server"
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.969370720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.969475800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.970142280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.973054360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.973106760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.973121240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.973135320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.973382280Z" level=info msg="Start subscribing containerd event"
Jul 2 08:17:54.973431 containerd[1694]: time="2024-07-02T08:17:54.973434720Z" level=info msg="Start recovering state"
Jul 2 08:17:54.973725 containerd[1694]: time="2024-07-02T08:17:54.973509680Z" level=info msg="Start event monitor"
Jul 2 08:17:54.973725 containerd[1694]: time="2024-07-02T08:17:54.973521240Z" level=info msg="Start snapshots syncer"
Jul 2 08:17:54.973725 containerd[1694]: time="2024-07-02T08:17:54.973530920Z" level=info msg="Start cni network conf syncer for default"
Jul 2 08:17:54.973725 containerd[1694]: time="2024-07-02T08:17:54.973538960Z" level=info msg="Start streaming server"
Jul 2 08:17:54.974008 containerd[1694]: time="2024-07-02T08:17:54.973885960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 08:17:54.974008 containerd[1694]: time="2024-07-02T08:17:54.973938920Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 08:17:54.974008 containerd[1694]: time="2024-07-02T08:17:54.973991720Z" level=info msg="containerd successfully booted in 0.094832s"
Jul 2 08:17:54.975497 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 08:17:54.977968 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:17:54.984035 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:17:54.992344 systemd[1]: Startup finished in 721ms (kernel) + 15.145s (initrd) + 16.298s (userspace) = 32.166s. Jul 2 08:17:55.412592 kubelet[1800]: E0702 08:17:55.412548 1800 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:17:55.415807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:17:55.416147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:17:55.470267 login[1786]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 08:17:55.476135 login[1787]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 08:17:55.482461 systemd-logind[1658]: New session 1 of user core. Jul 2 08:17:55.483591 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:17:55.490603 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:17:55.493372 systemd-logind[1658]: New session 2 of user core. Jul 2 08:17:55.501685 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:17:55.508957 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 08:17:55.512418 (systemd)[1815]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:17:55.690771 systemd[1815]: Queued start job for default target default.target. 
Jul 2 08:17:55.701355 systemd[1815]: Created slice app.slice - User Application Slice. Jul 2 08:17:55.701386 systemd[1815]: Reached target paths.target - Paths. Jul 2 08:17:55.701399 systemd[1815]: Reached target timers.target - Timers. Jul 2 08:17:55.702665 systemd[1815]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:17:55.715418 systemd[1815]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:17:55.715710 systemd[1815]: Reached target sockets.target - Sockets. Jul 2 08:17:55.715815 systemd[1815]: Reached target basic.target - Basic System. Jul 2 08:17:55.715929 systemd[1815]: Reached target default.target - Main User Target. Jul 2 08:17:55.715959 systemd[1815]: Startup finished in 197ms. Jul 2 08:17:55.716158 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:17:55.717780 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:17:55.718615 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:17:56.731801 waagent[1784]: 2024-07-02T08:17:56.731688Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 2 08:17:56.737724 waagent[1784]: 2024-07-02T08:17:56.737639Z INFO Daemon Daemon OS: flatcar 3975.1.1 Jul 2 08:17:56.742666 waagent[1784]: 2024-07-02T08:17:56.742590Z INFO Daemon Daemon Python: 3.11.9 Jul 2 08:17:56.747523 waagent[1784]: 2024-07-02T08:17:56.747276Z INFO Daemon Daemon Run daemon Jul 2 08:17:56.751783 waagent[1784]: 2024-07-02T08:17:56.751721Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.1.1' Jul 2 08:17:56.761207 waagent[1784]: 2024-07-02T08:17:56.761130Z INFO Daemon Daemon Using waagent for provisioning Jul 2 08:17:56.766842 waagent[1784]: 2024-07-02T08:17:56.766787Z INFO Daemon Daemon Activate resource disk Jul 2 08:17:56.771773 waagent[1784]: 2024-07-02T08:17:56.771706Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 08:17:56.783841 
waagent[1784]: 2024-07-02T08:17:56.783759Z INFO Daemon Daemon Found device: None Jul 2 08:17:56.788858 waagent[1784]: 2024-07-02T08:17:56.788789Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 08:17:56.798270 waagent[1784]: 2024-07-02T08:17:56.798204Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 08:17:56.811622 waagent[1784]: 2024-07-02T08:17:56.811553Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 08:17:56.817422 waagent[1784]: 2024-07-02T08:17:56.817361Z INFO Daemon Daemon Running default provisioning handler Jul 2 08:17:56.830391 waagent[1784]: 2024-07-02T08:17:56.830284Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 2 08:17:56.844005 waagent[1784]: 2024-07-02T08:17:56.843936Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 08:17:56.853645 waagent[1784]: 2024-07-02T08:17:56.853575Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 08:17:56.858953 waagent[1784]: 2024-07-02T08:17:56.858890Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 08:17:57.035877 waagent[1784]: 2024-07-02T08:17:57.035143Z INFO Daemon Daemon Successfully mounted dvd Jul 2 08:17:57.084660 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 08:17:57.087351 waagent[1784]: 2024-07-02T08:17:57.086636Z INFO Daemon Daemon Detect protocol endpoint Jul 2 08:17:57.091736 waagent[1784]: 2024-07-02T08:17:57.091663Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 08:17:57.097572 waagent[1784]: 2024-07-02T08:17:57.097507Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 2 08:17:57.104545 waagent[1784]: 2024-07-02T08:17:57.104476Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 08:17:57.109996 waagent[1784]: 2024-07-02T08:17:57.109932Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 08:17:57.115433 waagent[1784]: 2024-07-02T08:17:57.115369Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 08:17:57.130140 waagent[1784]: 2024-07-02T08:17:57.130091Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 08:17:57.136980 waagent[1784]: 2024-07-02T08:17:57.136946Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 08:17:57.142186 waagent[1784]: 2024-07-02T08:17:57.142125Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 08:17:57.543363 waagent[1784]: 2024-07-02T08:17:57.543046Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 08:17:57.549883 waagent[1784]: 2024-07-02T08:17:57.549809Z INFO Daemon Daemon Forcing an update of the goal state. Jul 2 08:17:57.559312 waagent[1784]: 2024-07-02T08:17:57.559248Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 08:17:57.607386 waagent[1784]: 2024-07-02T08:17:57.607335Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jul 2 08:17:57.613808 waagent[1784]: 2024-07-02T08:17:57.613756Z INFO Daemon Jul 2 08:17:57.616918 waagent[1784]: 2024-07-02T08:17:57.616860Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 1586d7cd-d2d3-49f7-beb4-dd372c04d4b3 eTag: 10370916176644040679 source: Fabric] Jul 2 08:17:57.629115 waagent[1784]: 2024-07-02T08:17:57.629062Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 2 08:17:57.636149 waagent[1784]: 2024-07-02T08:17:57.636098Z INFO Daemon Jul 2 08:17:57.639053 waagent[1784]: 2024-07-02T08:17:57.638996Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 2 08:17:57.651596 waagent[1784]: 2024-07-02T08:17:57.651556Z INFO Daemon Daemon Downloading artifacts profile blob Jul 2 08:17:57.752338 waagent[1784]: 2024-07-02T08:17:57.752217Z INFO Daemon Downloaded certificate {'thumbprint': '9CD091C89D1FFE5B2A2B414274A4E463E76F177F', 'hasPrivateKey': True} Jul 2 08:17:57.762601 waagent[1784]: 2024-07-02T08:17:57.762546Z INFO Daemon Downloaded certificate {'thumbprint': '3B9BEFDC13C3BE5E6D46F488456BC945B5ECA948', 'hasPrivateKey': False} Jul 2 08:17:57.777996 waagent[1784]: 2024-07-02T08:17:57.777932Z INFO Daemon Fetch goal state completed Jul 2 08:17:57.789425 waagent[1784]: 2024-07-02T08:17:57.789351Z INFO Daemon Daemon Starting provisioning Jul 2 08:17:57.794518 waagent[1784]: 2024-07-02T08:17:57.794414Z INFO Daemon Daemon Handle ovf-env.xml. Jul 2 08:17:57.799389 waagent[1784]: 2024-07-02T08:17:57.799330Z INFO Daemon Daemon Set hostname [ci-3975.1.1-a-7c4c792b73] Jul 2 08:17:57.812340 waagent[1784]: 2024-07-02T08:17:57.810786Z INFO Daemon Daemon Publish hostname [ci-3975.1.1-a-7c4c792b73] Jul 2 08:17:57.817861 waagent[1784]: 2024-07-02T08:17:57.817785Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 08:17:57.824240 waagent[1784]: 2024-07-02T08:17:57.824174Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 08:17:57.841743 systemd-networkd[1332]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:17:57.841751 systemd-networkd[1332]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 08:17:57.841799 systemd-networkd[1332]: eth0: DHCP lease lost Jul 2 08:17:57.843342 waagent[1784]: 2024-07-02T08:17:57.843214Z INFO Daemon Daemon Create user account if not exists Jul 2 08:17:57.849500 waagent[1784]: 2024-07-02T08:17:57.849427Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 08:17:57.855368 systemd-networkd[1332]: eth0: DHCPv6 lease lost Jul 2 08:17:57.855976 waagent[1784]: 2024-07-02T08:17:57.855897Z INFO Daemon Daemon Configure sudoer Jul 2 08:17:57.861155 waagent[1784]: 2024-07-02T08:17:57.861082Z INFO Daemon Daemon Configure sshd Jul 2 08:17:57.866272 waagent[1784]: 2024-07-02T08:17:57.866189Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 2 08:17:57.879790 waagent[1784]: 2024-07-02T08:17:57.879696Z INFO Daemon Daemon Deploy ssh public key. Jul 2 08:17:57.899397 systemd-networkd[1332]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 08:17:59.100743 waagent[1784]: 2024-07-02T08:17:59.100667Z INFO Daemon Daemon Provisioning complete Jul 2 08:17:59.121489 waagent[1784]: 2024-07-02T08:17:59.121428Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 08:17:59.128045 waagent[1784]: 2024-07-02T08:17:59.127975Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 2 08:17:59.138631 waagent[1784]: 2024-07-02T08:17:59.138560Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 2 08:17:59.287830 waagent[1864]: 2024-07-02T08:17:59.287738Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 2 08:17:59.288126 waagent[1864]: 2024-07-02T08:17:59.287899Z INFO ExtHandler ExtHandler OS: flatcar 3975.1.1 Jul 2 08:17:59.288126 waagent[1864]: 2024-07-02T08:17:59.287953Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 2 08:17:59.817344 waagent[1864]: 2024-07-02T08:17:59.817234Z INFO ExtHandler ExtHandler Distro: flatcar-3975.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 08:17:59.817553 waagent[1864]: 2024-07-02T08:17:59.817508Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 08:17:59.817617 waagent[1864]: 2024-07-02T08:17:59.817586Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 08:17:59.826462 waagent[1864]: 2024-07-02T08:17:59.826376Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 08:17:59.833086 waagent[1864]: 2024-07-02T08:17:59.833040Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 08:17:59.833672 waagent[1864]: 2024-07-02T08:17:59.833624Z INFO ExtHandler Jul 2 08:17:59.833747 waagent[1864]: 2024-07-02T08:17:59.833715Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c1c15419-5ae4-4c76-a405-a897eac56b09 eTag: 10370916176644040679 source: Fabric] Jul 2 08:17:59.834048 waagent[1864]: 2024-07-02T08:17:59.834007Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 2 08:17:59.889819 waagent[1864]: 2024-07-02T08:17:59.889722Z INFO ExtHandler Jul 2 08:17:59.889918 waagent[1864]: 2024-07-02T08:17:59.889884Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 08:17:59.894498 waagent[1864]: 2024-07-02T08:17:59.894458Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 08:17:59.990347 waagent[1864]: 2024-07-02T08:17:59.989869Z INFO ExtHandler Downloaded certificate {'thumbprint': '9CD091C89D1FFE5B2A2B414274A4E463E76F177F', 'hasPrivateKey': True} Jul 2 08:17:59.990501 waagent[1864]: 2024-07-02T08:17:59.990439Z INFO ExtHandler Downloaded certificate {'thumbprint': '3B9BEFDC13C3BE5E6D46F488456BC945B5ECA948', 'hasPrivateKey': False} Jul 2 08:17:59.990970 waagent[1864]: 2024-07-02T08:17:59.990919Z INFO ExtHandler Fetch goal state completed Jul 2 08:18:00.008977 waagent[1864]: 2024-07-02T08:18:00.008897Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1864 Jul 2 08:18:00.009157 waagent[1864]: 2024-07-02T08:18:00.009118Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 2 08:18:00.011070 waagent[1864]: 2024-07-02T08:18:00.010997Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 08:18:00.011525 waagent[1864]: 2024-07-02T08:18:00.011478Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 08:18:00.019021 waagent[1864]: 2024-07-02T08:18:00.018970Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 08:18:00.019229 waagent[1864]: 2024-07-02T08:18:00.019183Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 08:18:00.026359 waagent[1864]: 2024-07-02T08:18:00.026050Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Jul 2 08:18:00.033914 systemd[1]: Reloading requested from client PID 1879 ('systemctl') (unit waagent.service)... Jul 2 08:18:00.033934 systemd[1]: Reloading... Jul 2 08:18:00.120549 zram_generator::config[1911]: No configuration found. Jul 2 08:18:00.232693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:18:00.312968 systemd[1]: Reloading finished in 278 ms. Jul 2 08:18:00.339477 waagent[1864]: 2024-07-02T08:18:00.336733Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 2 08:18:00.343898 systemd[1]: Reloading requested from client PID 1964 ('systemctl') (unit waagent.service)... Jul 2 08:18:00.343915 systemd[1]: Reloading... Jul 2 08:18:00.426366 zram_generator::config[1995]: No configuration found. Jul 2 08:18:00.538512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:18:00.618980 systemd[1]: Reloading finished in 274 ms. Jul 2 08:18:00.640589 waagent[1864]: 2024-07-02T08:18:00.639750Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 2 08:18:00.640589 waagent[1864]: 2024-07-02T08:18:00.639923Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 2 08:18:01.119348 waagent[1864]: 2024-07-02T08:18:01.118638Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 08:18:01.119348 waagent[1864]: 2024-07-02T08:18:01.119277Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 08:18:01.120214 waagent[1864]: 2024-07-02T08:18:01.120148Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 08:18:01.120735 waagent[1864]: 2024-07-02T08:18:01.120625Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 08:18:01.121409 waagent[1864]: 2024-07-02T08:18:01.121001Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 08:18:01.121409 waagent[1864]: 2024-07-02T08:18:01.121103Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 08:18:01.121409 waagent[1864]: 2024-07-02T08:18:01.121237Z INFO EnvHandler ExtHandler Configure routes Jul 2 08:18:01.121409 waagent[1864]: 2024-07-02T08:18:01.121298Z INFO EnvHandler ExtHandler Gateway:None Jul 2 08:18:01.121534 waagent[1864]: 2024-07-02T08:18:01.121465Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 08:18:01.121584 waagent[1864]: 2024-07-02T08:18:01.121547Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 08:18:01.121804 waagent[1864]: 2024-07-02T08:18:01.121754Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 2 08:18:01.122076 waagent[1864]: 2024-07-02T08:18:01.121918Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 08:18:01.122139 waagent[1864]: 2024-07-02T08:18:01.122098Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 08:18:01.122244 waagent[1864]: 2024-07-02T08:18:01.122189Z INFO EnvHandler ExtHandler Routes:None Jul 2 08:18:01.122665 waagent[1864]: 2024-07-02T08:18:01.122612Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 08:18:01.122877 waagent[1864]: 2024-07-02T08:18:01.122805Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Jul 2 08:18:01.122953 waagent[1864]: 2024-07-02T08:18:01.122875Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 08:18:01.122953 waagent[1864]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 08:18:01.122953 waagent[1864]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 08:18:01.122953 waagent[1864]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 08:18:01.122953 waagent[1864]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 08:18:01.122953 waagent[1864]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 08:18:01.122953 waagent[1864]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 08:18:01.124428 waagent[1864]: 2024-07-02T08:18:01.124337Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 08:18:01.130937 waagent[1864]: 2024-07-02T08:18:01.130832Z INFO ExtHandler ExtHandler Jul 2 08:18:01.131259 waagent[1864]: 2024-07-02T08:18:01.131002Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5cf9e85a-b0bf-4177-8e07-708f985e2edf correlation bb79547c-3be0-48e1-a6a2-d36a25cb4fb7 created: 2024-07-02T08:16:07.317322Z] Jul 2 08:18:01.132224 waagent[1864]: 2024-07-02T08:18:01.132152Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 2 08:18:01.133960 waagent[1864]: 2024-07-02T08:18:01.133895Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jul 2 08:18:01.185166 waagent[1864]: 2024-07-02T08:18:01.185089Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F49A03AA-E521-4A5A-B88B-71C1A79BE4AC;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 2 08:18:01.203476 waagent[1864]: 2024-07-02T08:18:01.203382Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 08:18:01.203476 waagent[1864]: Executing ['ip', '-a', '-o', 'link']: Jul 2 08:18:01.203476 waagent[1864]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 08:18:01.203476 waagent[1864]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:dc:5a brd ff:ff:ff:ff:ff:ff Jul 2 08:18:01.203476 waagent[1864]: 3: enP41908s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:dc:5a brd ff:ff:ff:ff:ff:ff\ altname enP41908p0s2 Jul 2 08:18:01.203476 waagent[1864]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 08:18:01.203476 waagent[1864]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 08:18:01.203476 waagent[1864]: 2: eth0 inet 10.200.20.44/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 08:18:01.203476 waagent[1864]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 08:18:01.203476 waagent[1864]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 2 08:18:01.203476 waagent[1864]: 2: eth0 inet6 fe80::222:48ff:fe77:dc5a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 2 08:18:01.203476 waagent[1864]: 3: enP41908s1 inet6 fe80::222:48ff:fe77:dc5a/64 scope link proto kernel_ll \ 
valid_lft forever preferred_lft forever Jul 2 08:18:01.315896 waagent[1864]: 2024-07-02T08:18:01.315795Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 2 08:18:01.315896 waagent[1864]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 08:18:01.315896 waagent[1864]: pkts bytes target prot opt in out source destination Jul 2 08:18:01.315896 waagent[1864]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 08:18:01.315896 waagent[1864]: pkts bytes target prot opt in out source destination Jul 2 08:18:01.315896 waagent[1864]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 08:18:01.315896 waagent[1864]: pkts bytes target prot opt in out source destination Jul 2 08:18:01.315896 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 08:18:01.315896 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 08:18:01.315896 waagent[1864]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 08:18:01.319287 waagent[1864]: 2024-07-02T08:18:01.319199Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 08:18:01.319287 waagent[1864]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 08:18:01.319287 waagent[1864]: pkts bytes target prot opt in out source destination Jul 2 08:18:01.319287 waagent[1864]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 08:18:01.319287 waagent[1864]: pkts bytes target prot opt in out source destination Jul 2 08:18:01.319287 waagent[1864]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 08:18:01.319287 waagent[1864]: pkts bytes target prot opt in out source destination Jul 2 08:18:01.319287 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 08:18:01.319287 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 08:18:01.319287 waagent[1864]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 08:18:01.319622 waagent[1864]: 
2024-07-02T08:18:01.319553Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 08:18:05.652691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:18:05.664518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:18:05.764522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:18:05.776671 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:18:05.843937 kubelet[2091]: E0702 08:18:05.843875 2091 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:18:05.848165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:18:05.848487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:18:15.902681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:18:15.910528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:18:16.010652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 08:18:16.015601 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:18:16.099382 kubelet[2107]: E0702 08:18:16.099294 2107 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:18:16.101782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:18:16.101908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:18:17.554922 chronyd[1650]: Selected source PHC0 Jul 2 08:18:26.152747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:18:26.159540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:18:26.255470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:18:26.260485 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:18:26.310326 kubelet[2126]: E0702 08:18:26.310255 2126 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:18:26.313233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:18:26.313562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:18:33.750468 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jul 2 08:18:36.402724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 08:18:36.411739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:18:36.515167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:18:36.524694 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:18:36.569137 kubelet[2142]: E0702 08:18:36.569052 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:18:36.572158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:18:36.572500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:18:38.844903 update_engine[1662]: I0702 08:18:38.844347 1662 update_attempter.cc:509] Updating boot flags... Jul 2 08:18:38.912373 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2162) Jul 2 08:18:44.179486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:18:44.180699 systemd[1]: Started sshd@0-10.200.20.44:22-10.200.16.10:40508.service - OpenSSH per-connection server daemon (10.200.16.10:40508). Jul 2 08:18:44.608259 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 40508 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:18:44.609589 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:18:44.614066 systemd-logind[1658]: New session 3 of user core. Jul 2 08:18:44.623468 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 2 08:18:44.981052 systemd[1]: Started sshd@1-10.200.20.44:22-10.200.16.10:40510.service - OpenSSH per-connection server daemon (10.200.16.10:40510).
Jul 2 08:18:45.397080 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 40510 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ
Jul 2 08:18:45.398517 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:45.402578 systemd-logind[1658]: New session 4 of user core.
Jul 2 08:18:45.410479 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 08:18:45.700686 sshd[2195]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:45.704520 systemd[1]: sshd@1-10.200.20.44:22-10.200.16.10:40510.service: Deactivated successfully.
Jul 2 08:18:45.706265 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 08:18:45.707882 systemd-logind[1658]: Session 4 logged out. Waiting for processes to exit.
Jul 2 08:18:45.709058 systemd-logind[1658]: Removed session 4.
Jul 2 08:18:45.792772 systemd[1]: Started sshd@2-10.200.20.44:22-10.200.16.10:40512.service - OpenSSH per-connection server daemon (10.200.16.10:40512).
Jul 2 08:18:46.232555 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 40512 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ
Jul 2 08:18:46.234102 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:46.239098 systemd-logind[1658]: New session 5 of user core.
Jul 2 08:18:46.246541 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 08:18:46.563567 sshd[2202]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:46.567506 systemd[1]: sshd@2-10.200.20.44:22-10.200.16.10:40512.service: Deactivated successfully.
Jul 2 08:18:46.569082 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 08:18:46.569907 systemd-logind[1658]: Session 5 logged out. Waiting for processes to exit.
Jul 2 08:18:46.570847 systemd-logind[1658]: Removed session 5.
Jul 2 08:18:46.640257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 2 08:18:46.655776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:18:46.657552 systemd[1]: Started sshd@3-10.200.20.44:22-10.200.16.10:40520.service - OpenSSH per-connection server daemon (10.200.16.10:40520).
Jul 2 08:18:46.752014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:18:46.764620 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:18:46.809405 kubelet[2219]: E0702 08:18:46.809345 2219 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:18:46.811646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:18:46.811774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:18:47.071413 sshd[2210]: Accepted publickey for core from 10.200.16.10 port 40520 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ
Jul 2 08:18:47.072813 sshd[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:47.077184 systemd-logind[1658]: New session 6 of user core.
Jul 2 08:18:47.089496 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 08:18:47.372713 sshd[2210]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:47.376948 systemd[1]: sshd@3-10.200.20.44:22-10.200.16.10:40520.service: Deactivated successfully.
Jul 2 08:18:47.378450 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 08:18:47.379678 systemd-logind[1658]: Session 6 logged out. Waiting for processes to exit.
Jul 2 08:18:47.380651 systemd-logind[1658]: Removed session 6.
Jul 2 08:18:47.459571 systemd[1]: Started sshd@4-10.200.20.44:22-10.200.16.10:40530.service - OpenSSH per-connection server daemon (10.200.16.10:40530).
Jul 2 08:18:47.905689 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 40530 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ
Jul 2 08:18:47.907137 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:47.911951 systemd-logind[1658]: New session 7 of user core.
Jul 2 08:18:47.918568 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 08:18:48.221450 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 08:18:48.221698 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:18:48.246270 sudo[2235]: pam_unix(sudo:session): session closed for user root
Jul 2 08:18:48.328025 sshd[2232]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:48.331179 systemd[1]: sshd@4-10.200.20.44:22-10.200.16.10:40530.service: Deactivated successfully.
Jul 2 08:18:48.333043 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 08:18:48.334445 systemd-logind[1658]: Session 7 logged out. Waiting for processes to exit.
Jul 2 08:18:48.335817 systemd-logind[1658]: Removed session 7.
Jul 2 08:18:48.419574 systemd[1]: Started sshd@5-10.200.20.44:22-10.200.16.10:40674.service - OpenSSH per-connection server daemon (10.200.16.10:40674).
Jul 2 08:18:48.859791 sshd[2240]: Accepted publickey for core from 10.200.16.10 port 40674 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ
Jul 2 08:18:48.861668 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:48.865526 systemd-logind[1658]: New session 8 of user core.
Jul 2 08:18:48.873540 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 08:18:49.113246 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 08:18:49.113902 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:18:49.117148 sudo[2244]: pam_unix(sudo:session): session closed for user root
Jul 2 08:18:49.122231 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 08:18:49.122479 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:18:49.133567 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 08:18:49.137889 auditctl[2247]: No rules
Jul 2 08:18:49.138212 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 08:18:49.138424 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 08:18:49.142868 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 08:18:49.168161 augenrules[2265]: No rules
Jul 2 08:18:49.169714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 08:18:49.171070 sudo[2243]: pam_unix(sudo:session): session closed for user root
Jul 2 08:18:49.252782 sshd[2240]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:49.256606 systemd[1]: sshd@5-10.200.20.44:22-10.200.16.10:40674.service: Deactivated successfully.
Jul 2 08:18:49.258156 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 08:18:49.259104 systemd-logind[1658]: Session 8 logged out. Waiting for processes to exit.
Jul 2 08:18:49.260089 systemd-logind[1658]: Removed session 8.
Jul 2 08:18:49.328011 systemd[1]: Started sshd@6-10.200.20.44:22-10.200.16.10:40688.service - OpenSSH per-connection server daemon (10.200.16.10:40688).
Jul 2 08:18:49.741125 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 40688 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ
Jul 2 08:18:49.742502 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:49.746243 systemd-logind[1658]: New session 9 of user core.
Jul 2 08:18:49.750522 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 08:18:49.978884 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 08:18:49.979113 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:18:50.549653 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 08:18:50.549809 (dockerd)[2285]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 08:18:51.769388 dockerd[2285]: time="2024-07-02T08:18:51.769195620Z" level=info msg="Starting up"
Jul 2 08:18:51.857954 dockerd[2285]: time="2024-07-02T08:18:51.857852699Z" level=info msg="Loading containers: start."
Jul 2 08:18:52.185501 kernel: Initializing XFRM netlink socket
Jul 2 08:18:52.375458 systemd-networkd[1332]: docker0: Link UP
Jul 2 08:18:52.407323 dockerd[2285]: time="2024-07-02T08:18:52.406760605Z" level=info msg="Loading containers: done."
Jul 2 08:18:52.901130 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2856352468-merged.mount: Deactivated successfully.
Jul 2 08:18:52.912721 dockerd[2285]: time="2024-07-02T08:18:52.912667335Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 08:18:52.913007 dockerd[2285]: time="2024-07-02T08:18:52.912885455Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 08:18:52.913007 dockerd[2285]: time="2024-07-02T08:18:52.913001735Z" level=info msg="Daemon has completed initialization"
Jul 2 08:18:52.965798 dockerd[2285]: time="2024-07-02T08:18:52.963490448Z" level=info msg="API listen on /run/docker.sock"
Jul 2 08:18:52.965386 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 08:18:53.571594 containerd[1694]: time="2024-07-02T08:18:53.571515666Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 08:18:54.341460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753249595.mount: Deactivated successfully.
Jul 2 08:18:56.576585 containerd[1694]: time="2024-07-02T08:18:56.576526826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:18:56.579708 containerd[1694]: time="2024-07-02T08:18:56.579501794Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940430"
Jul 2 08:18:56.584116 containerd[1694]: time="2024-07-02T08:18:56.584066886Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:18:56.589235 containerd[1694]: time="2024-07-02T08:18:56.589163819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:18:56.590359 containerd[1694]: time="2024-07-02T08:18:56.590151422Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 3.018592816s"
Jul 2 08:18:56.590359 containerd[1694]: time="2024-07-02T08:18:56.590201382Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\""
Jul 2 08:18:56.611106 containerd[1694]: time="2024-07-02T08:18:56.611055118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 08:18:56.902635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul 2 08:18:56.910547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:18:57.176238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:18:57.184744 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:18:57.226221 kubelet[2481]: E0702 08:18:57.226158 2481 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:18:57.229249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:18:57.229421 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:18:59.903360 containerd[1694]: time="2024-07-02T08:18:59.903097914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:18:59.906829 containerd[1694]: time="2024-07-02T08:18:59.906545682Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881371"
Jul 2 08:18:59.910588 containerd[1694]: time="2024-07-02T08:18:59.910525852Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:18:59.918870 containerd[1694]: time="2024-07-02T08:18:59.918788232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:18:59.919982 containerd[1694]: time="2024-07-02T08:18:59.919814275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 3.308553716s"
Jul 2 08:18:59.919982 containerd[1694]: time="2024-07-02T08:18:59.919857155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\""
Jul 2 08:18:59.943505 containerd[1694]: time="2024-07-02T08:18:59.943462693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 08:19:01.778559 containerd[1694]: time="2024-07-02T08:19:01.778502136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:01.781645 containerd[1694]: time="2024-07-02T08:19:01.781414143Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155688"
Jul 2 08:19:01.786423 containerd[1694]: time="2024-07-02T08:19:01.786382955Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:01.793121 containerd[1694]: time="2024-07-02T08:19:01.793054011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:01.794540 containerd[1694]: time="2024-07-02T08:19:01.794498815Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 1.850991882s"
Jul 2 08:19:01.794598 containerd[1694]: time="2024-07-02T08:19:01.794543855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\""
Jul 2 08:19:01.816153 containerd[1694]: time="2024-07-02T08:19:01.815912147Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 08:19:03.312286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21632404.mount: Deactivated successfully.
Jul 2 08:19:03.768581 containerd[1694]: time="2024-07-02T08:19:03.768461037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:03.772875 containerd[1694]: time="2024-07-02T08:19:03.772576687Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634092"
Jul 2 08:19:03.775759 containerd[1694]: time="2024-07-02T08:19:03.775706095Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:03.783322 containerd[1694]: time="2024-07-02T08:19:03.783115833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:03.784940 containerd[1694]: time="2024-07-02T08:19:03.784645637Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 1.96868953s"
Jul 2 08:19:03.784940 containerd[1694]: time="2024-07-02T08:19:03.784722957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\""
Jul 2 08:19:03.828860 containerd[1694]: time="2024-07-02T08:19:03.828731425Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 08:19:04.868847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244173464.mount: Deactivated successfully.
Jul 2 08:19:05.941072 containerd[1694]: time="2024-07-02T08:19:05.941008540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:05.943616 containerd[1694]: time="2024-07-02T08:19:05.943557620Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jul 2 08:19:05.946433 containerd[1694]: time="2024-07-02T08:19:05.946377021Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:05.953271 containerd[1694]: time="2024-07-02T08:19:05.953205943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:05.954630 containerd[1694]: time="2024-07-02T08:19:05.954501584Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.125701199s"
Jul 2 08:19:05.954630 containerd[1694]: time="2024-07-02T08:19:05.954544504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jul 2 08:19:05.976781 containerd[1694]: time="2024-07-02T08:19:05.976737110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 08:19:06.597761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178371807.mount: Deactivated successfully.
Jul 2 08:19:06.625361 containerd[1694]: time="2024-07-02T08:19:06.624649185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:06.626868 containerd[1694]: time="2024-07-02T08:19:06.626830706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Jul 2 08:19:06.631285 containerd[1694]: time="2024-07-02T08:19:06.631255387Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:06.636580 containerd[1694]: time="2024-07-02T08:19:06.636536909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:06.637230 containerd[1694]: time="2024-07-02T08:19:06.637190469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 660.409079ms"
Jul 2 08:19:06.637230 containerd[1694]: time="2024-07-02T08:19:06.637227789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 08:19:06.657920 containerd[1694]: time="2024-07-02T08:19:06.657874275Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 08:19:07.402644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jul 2 08:19:07.409567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:19:07.440841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287030439.mount: Deactivated successfully.
Jul 2 08:19:07.569892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:19:07.588197 (kubelet)[2588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 08:19:07.648555 kubelet[2588]: E0702 08:19:07.648401 2588 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:19:07.652401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:19:07.652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:19:10.139886 containerd[1694]: time="2024-07-02T08:19:10.139822302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:10.142489 containerd[1694]: time="2024-07-02T08:19:10.142432668Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Jul 2 08:19:10.147856 containerd[1694]: time="2024-07-02T08:19:10.147797202Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:10.153799 containerd[1694]: time="2024-07-02T08:19:10.153732496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:10.155362 containerd[1694]: time="2024-07-02T08:19:10.154943139Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.497025184s"
Jul 2 08:19:10.155362 containerd[1694]: time="2024-07-02T08:19:10.154984939Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jul 2 08:19:15.542375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:19:15.551580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:19:15.575618 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-9.scope)...
Jul 2 08:19:15.575643 systemd[1]: Reloading...
Jul 2 08:19:15.685481 zram_generator::config[2741]: No configuration found.
Jul 2 08:19:15.787214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:19:15.865499 systemd[1]: Reloading finished in 289 ms.
Jul 2 08:19:15.915142 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:19:15.920116 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 08:19:15.920381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:19:15.924574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:19:16.069192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:19:16.085671 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 08:19:16.134062 kubelet[2809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:19:16.134062 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:19:16.134062 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:19:16.134585 kubelet[2809]: I0702 08:19:16.134020 2809 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:19:16.881352 kubelet[2809]: I0702 08:19:16.881091 2809 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 08:19:16.881352 kubelet[2809]: I0702 08:19:16.881119 2809 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:19:16.881566 kubelet[2809]: I0702 08:19:16.881382 2809 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 08:19:16.894025 kubelet[2809]: E0702 08:19:16.893984 2809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.44:6443: connect: connection refused
Jul 2 08:19:16.894560 kubelet[2809]: I0702 08:19:16.894331 2809 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:19:16.905935 kubelet[2809]: I0702 08:19:16.905906 2809 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:19:16.907269 kubelet[2809]: I0702 08:19:16.907232 2809 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:19:16.907867 kubelet[2809]: I0702 08:19:16.907401 2809 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-a-7c4c792b73","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:19:16.907867 kubelet[2809]: I0702 08:19:16.907609 2809 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:19:16.907867 kubelet[2809]: I0702 08:19:16.907620 2809 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:19:16.907867 kubelet[2809]: I0702 08:19:16.907749 2809 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:19:16.908579 kubelet[2809]: I0702 08:19:16.908565 2809 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 08:19:16.908649 kubelet[2809]: I0702 08:19:16.908640 2809 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:19:16.908740 kubelet[2809]: I0702 08:19:16.908730 2809 kubelet.go:312] "Adding apiserver pod source"
Jul 2 08:19:16.908799 kubelet[2809]: I0702 08:19:16.908790 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:19:16.909517 kubelet[2809]: W0702 08:19:16.909467 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused
Jul 2 08:19:16.909697 kubelet[2809]: E0702 08:19:16.909617 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused
Jul 2 08:19:16.909802 kubelet[2809]: W0702 08:19:16.909772 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7c4c792b73&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused
Jul 2 08:19:16.909890 kubelet[2809]: E0702 08:19:16.909871 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7c4c792b73&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused
Jul 2 08:19:16.911512 kubelet[2809]: I0702 08:19:16.910261 2809 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 08:19:16.911512 kubelet[2809]: I0702 08:19:16.910468 2809 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 08:19:16.911512 kubelet[2809]: W0702 08:19:16.910512 2809 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:19:16.911512 kubelet[2809]: I0702 08:19:16.911265 2809 server.go:1264] "Started kubelet"
Jul 2 08:19:16.914996 kubelet[2809]: I0702 08:19:16.914968 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:19:16.916732 kubelet[2809]: E0702 08:19:16.916581 2809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-a-7c4c792b73.17de5789c3eba5d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-7c4c792b73,UID:ci-3975.1.1-a-7c4c792b73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-7c4c792b73,},FirstTimestamp:2024-07-02 08:19:16.911244756 +0000 UTC m=+0.821487547,LastTimestamp:2024-07-02 08:19:16.911244756 +0000 UTC m=+0.821487547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-7c4c792b73,}"
Jul 2 08:19:16.917955 kubelet[2809]: I0702 08:19:16.917899 2809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:19:16.918858 kubelet[2809]: I0702 08:19:16.918816 2809 server.go:455] "Adding debug handlers to 
kubelet server" Jul 2 08:19:16.919759 kubelet[2809]: I0702 08:19:16.919681 2809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:19:16.919947 kubelet[2809]: I0702 08:19:16.919922 2809 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:19:16.921563 kubelet[2809]: I0702 08:19:16.921530 2809 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:19:16.921659 kubelet[2809]: I0702 08:19:16.921638 2809 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:19:16.922759 kubelet[2809]: I0702 08:19:16.922715 2809 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:19:16.923486 kubelet[2809]: W0702 08:19:16.923432 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:16.923552 kubelet[2809]: E0702 08:19:16.923490 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:16.923590 kubelet[2809]: E0702 08:19:16.923545 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7c4c792b73?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="200ms" Jul 2 08:19:16.925338 kubelet[2809]: I0702 08:19:16.925275 2809 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:19:16.925338 kubelet[2809]: I0702 08:19:16.925303 2809 factory.go:221] Registration of the systemd container factory 
successfully Jul 2 08:19:16.925451 kubelet[2809]: I0702 08:19:16.925419 2809 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:19:16.943823 kubelet[2809]: E0702 08:19:16.941666 2809 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:19:16.947975 kubelet[2809]: I0702 08:19:16.947626 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:19:16.948845 kubelet[2809]: I0702 08:19:16.948824 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:19:16.948970 kubelet[2809]: I0702 08:19:16.948961 2809 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:19:16.949050 kubelet[2809]: I0702 08:19:16.949042 2809 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:19:16.949162 kubelet[2809]: E0702 08:19:16.949145 2809 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:19:16.952215 kubelet[2809]: W0702 08:19:16.952143 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:16.952215 kubelet[2809]: E0702 08:19:16.952181 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:17.049412 kubelet[2809]: E0702 08:19:17.049377 2809 kubelet.go:2361] "Skipping pod synchronization" 
err="container runtime status check may not have completed yet" Jul 2 08:19:17.054335 kubelet[2809]: I0702 08:19:17.053943 2809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.054335 kubelet[2809]: E0702 08:19:17.054269 2809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.054546 kubelet[2809]: I0702 08:19:17.054521 2809 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:19:17.054546 kubelet[2809]: I0702 08:19:17.054540 2809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:19:17.054618 kubelet[2809]: I0702 08:19:17.054559 2809 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:19:17.061283 kubelet[2809]: I0702 08:19:17.061253 2809 policy_none.go:49] "None policy: Start" Jul 2 08:19:17.062001 kubelet[2809]: I0702 08:19:17.061970 2809 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:19:17.062001 kubelet[2809]: I0702 08:19:17.062006 2809 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:19:17.074874 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 08:19:17.089656 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 08:19:17.093151 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 08:19:17.100779 kubelet[2809]: I0702 08:19:17.100175 2809 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:19:17.100779 kubelet[2809]: I0702 08:19:17.100417 2809 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:19:17.100779 kubelet[2809]: I0702 08:19:17.100526 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:19:17.103212 kubelet[2809]: E0702 08:19:17.103180 2809 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-a-7c4c792b73\" not found" Jul 2 08:19:17.125029 kubelet[2809]: E0702 08:19:17.124974 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7c4c792b73?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="400ms" Jul 2 08:19:17.250330 kubelet[2809]: I0702 08:19:17.250218 2809 topology_manager.go:215] "Topology Admit Handler" podUID="ab4ece8e0b5322870d85ae9e25fa1d22" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.252783 kubelet[2809]: I0702 08:19:17.252669 2809 topology_manager.go:215] "Topology Admit Handler" podUID="6e5cc21568e1634d01668ab1306913a8" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.255125 kubelet[2809]: I0702 08:19:17.254972 2809 topology_manager.go:215] "Topology Admit Handler" podUID="635c83fc299791663b142dff1b4931ba" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.257380 kubelet[2809]: I0702 08:19:17.257295 2809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.258238 kubelet[2809]: E0702 08:19:17.257909 2809 kubelet_node_status.go:96] "Unable to 
register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.263898 systemd[1]: Created slice kubepods-burstable-podab4ece8e0b5322870d85ae9e25fa1d22.slice - libcontainer container kubepods-burstable-podab4ece8e0b5322870d85ae9e25fa1d22.slice. Jul 2 08:19:17.280412 systemd[1]: Created slice kubepods-burstable-pod6e5cc21568e1634d01668ab1306913a8.slice - libcontainer container kubepods-burstable-pod6e5cc21568e1634d01668ab1306913a8.slice. Jul 2 08:19:17.294518 systemd[1]: Created slice kubepods-burstable-pod635c83fc299791663b142dff1b4931ba.slice - libcontainer container kubepods-burstable-pod635c83fc299791663b142dff1b4931ba.slice. Jul 2 08:19:17.324611 kubelet[2809]: I0702 08:19:17.324547 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324611 kubelet[2809]: I0702 08:19:17.324608 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324787 kubelet[2809]: I0702 08:19:17.324627 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " 
pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324787 kubelet[2809]: I0702 08:19:17.324656 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324787 kubelet[2809]: I0702 08:19:17.324676 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324787 kubelet[2809]: I0702 08:19:17.324696 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/635c83fc299791663b142dff1b4931ba-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-7c4c792b73\" (UID: \"635c83fc299791663b142dff1b4931ba\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324787 kubelet[2809]: I0702 08:19:17.324711 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab4ece8e0b5322870d85ae9e25fa1d22-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" (UID: \"ab4ece8e0b5322870d85ae9e25fa1d22\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324904 kubelet[2809]: I0702 08:19:17.324726 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ab4ece8e0b5322870d85ae9e25fa1d22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" (UID: \"ab4ece8e0b5322870d85ae9e25fa1d22\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.324904 kubelet[2809]: I0702 08:19:17.324742 2809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab4ece8e0b5322870d85ae9e25fa1d22-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" (UID: \"ab4ece8e0b5322870d85ae9e25fa1d22\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.525890 kubelet[2809]: E0702 08:19:17.525724 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7c4c792b73?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="800ms" Jul 2 08:19:17.578832 containerd[1694]: time="2024-07-02T08:19:17.578780033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-7c4c792b73,Uid:ab4ece8e0b5322870d85ae9e25fa1d22,Namespace:kube-system,Attempt:0,}" Jul 2 08:19:17.593175 containerd[1694]: time="2024-07-02T08:19:17.593130557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-7c4c792b73,Uid:6e5cc21568e1634d01668ab1306913a8,Namespace:kube-system,Attempt:0,}" Jul 2 08:19:17.597129 containerd[1694]: time="2024-07-02T08:19:17.597059449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-7c4c792b73,Uid:635c83fc299791663b142dff1b4931ba,Namespace:kube-system,Attempt:0,}" Jul 2 08:19:17.660583 kubelet[2809]: I0702 08:19:17.660537 2809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:17.661553 kubelet[2809]: E0702 08:19:17.661511 2809 kubelet_node_status.go:96] "Unable 
to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:18.146673 kubelet[2809]: W0702 08:19:18.146624 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.146673 kubelet[2809]: E0702 08:19:18.146675 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.304892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165853200.mount: Deactivated successfully. Jul 2 08:19:18.326869 kubelet[2809]: E0702 08:19:18.326820 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7c4c792b73?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="1.6s" Jul 2 08:19:18.342363 containerd[1694]: time="2024-07-02T08:19:18.341747121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:19:18.345335 containerd[1694]: time="2024-07-02T08:19:18.345279132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 08:19:18.346717 kubelet[2809]: W0702 08:19:18.346650 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7c4c792b73&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.346808 kubelet[2809]: E0702 08:19:18.346722 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-7c4c792b73&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.352353 containerd[1694]: time="2024-07-02T08:19:18.351742632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:19:18.354971 containerd[1694]: time="2024-07-02T08:19:18.354196999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:19:18.358713 containerd[1694]: time="2024-07-02T08:19:18.358668533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:19:18.362954 containerd[1694]: time="2024-07-02T08:19:18.362263384Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:19:18.365631 containerd[1694]: time="2024-07-02T08:19:18.365565154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:19:18.370213 containerd[1694]: time="2024-07-02T08:19:18.370154488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:19:18.370940 kubelet[2809]: W0702 08:19:18.370851 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.370940 kubelet[2809]: E0702 08:19:18.370914 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.371991 containerd[1694]: time="2024-07-02T08:19:18.371098851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 777.865894ms" Jul 2 08:19:18.373630 containerd[1694]: time="2024-07-02T08:19:18.373592018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 776.426849ms" Jul 2 08:19:18.374291 containerd[1694]: time="2024-07-02T08:19:18.374243180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 795.364267ms" Jul 2 08:19:18.442049 kubelet[2809]: 
W0702 08:19:18.441919 2809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.442049 kubelet[2809]: E0702 08:19:18.441987 2809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:18.463552 kubelet[2809]: I0702 08:19:18.463503 2809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:18.463853 kubelet[2809]: E0702 08:19:18.463823 2809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:18.620439 containerd[1694]: time="2024-07-02T08:19:18.620018650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:19:18.620439 containerd[1694]: time="2024-07-02T08:19:18.620174931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:18.620439 containerd[1694]: time="2024-07-02T08:19:18.620204131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:19:18.620439 containerd[1694]: time="2024-07-02T08:19:18.620219411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:18.622430 containerd[1694]: time="2024-07-02T08:19:18.622015176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:19:18.622430 containerd[1694]: time="2024-07-02T08:19:18.622082096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:18.622430 containerd[1694]: time="2024-07-02T08:19:18.622109217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:19:18.622430 containerd[1694]: time="2024-07-02T08:19:18.622124337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:18.630473 containerd[1694]: time="2024-07-02T08:19:18.630203121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:19:18.630473 containerd[1694]: time="2024-07-02T08:19:18.630359282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:18.630627 containerd[1694]: time="2024-07-02T08:19:18.630486002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:19:18.630934 containerd[1694]: time="2024-07-02T08:19:18.630890003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:18.647006 systemd[1]: Started cri-containerd-7666bc7b3998919ba31a0e72bb75596403843a59046f4b335ea057c086f9708a.scope - libcontainer container 7666bc7b3998919ba31a0e72bb75596403843a59046f4b335ea057c086f9708a. Jul 2 08:19:18.651016 systemd[1]: Started cri-containerd-e8f38ce1710dc1659ef3c37cf9e186192c4423ec3e863161490588e2eb413aec.scope - libcontainer container e8f38ce1710dc1659ef3c37cf9e186192c4423ec3e863161490588e2eb413aec. 
Jul 2 08:19:18.657469 systemd[1]: Started cri-containerd-a4908e6b7fec8943d3abb6488313b2db6b523285e3a0a6e7a63aabcba6ac12f7.scope - libcontainer container a4908e6b7fec8943d3abb6488313b2db6b523285e3a0a6e7a63aabcba6ac12f7. Jul 2 08:19:18.707748 containerd[1694]: time="2024-07-02T08:19:18.706903315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-7c4c792b73,Uid:6e5cc21568e1634d01668ab1306913a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4908e6b7fec8943d3abb6488313b2db6b523285e3a0a6e7a63aabcba6ac12f7\"" Jul 2 08:19:18.710697 containerd[1694]: time="2024-07-02T08:19:18.710629567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-7c4c792b73,Uid:635c83fc299791663b142dff1b4931ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7666bc7b3998919ba31a0e72bb75596403843a59046f4b335ea057c086f9708a\"" Jul 2 08:19:18.711820 containerd[1694]: time="2024-07-02T08:19:18.711671050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-7c4c792b73,Uid:ab4ece8e0b5322870d85ae9e25fa1d22,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8f38ce1710dc1659ef3c37cf9e186192c4423ec3e863161490588e2eb413aec\"" Jul 2 08:19:18.715082 containerd[1694]: time="2024-07-02T08:19:18.714930780Z" level=info msg="CreateContainer within sandbox \"a4908e6b7fec8943d3abb6488313b2db6b523285e3a0a6e7a63aabcba6ac12f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:19:18.717868 containerd[1694]: time="2024-07-02T08:19:18.717834029Z" level=info msg="CreateContainer within sandbox \"7666bc7b3998919ba31a0e72bb75596403843a59046f4b335ea057c086f9708a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:19:18.719159 containerd[1694]: time="2024-07-02T08:19:18.718640271Z" level=info msg="CreateContainer within sandbox \"e8f38ce1710dc1659ef3c37cf9e186192c4423ec3e863161490588e2eb413aec\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:19:18.766999 containerd[1694]: time="2024-07-02T08:19:18.766940418Z" level=info msg="CreateContainer within sandbox \"a4908e6b7fec8943d3abb6488313b2db6b523285e3a0a6e7a63aabcba6ac12f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943\"" Jul 2 08:19:18.767776 containerd[1694]: time="2024-07-02T08:19:18.767746301Z" level=info msg="StartContainer for \"1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943\"" Jul 2 08:19:18.785099 containerd[1694]: time="2024-07-02T08:19:18.784951633Z" level=info msg="CreateContainer within sandbox \"e8f38ce1710dc1659ef3c37cf9e186192c4423ec3e863161490588e2eb413aec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9e0461c5fd8480f1eea4ee15e4470db8ba87f85b9582a6c668fcc07de14c822\"" Jul 2 08:19:18.786799 containerd[1694]: time="2024-07-02T08:19:18.785665636Z" level=info msg="StartContainer for \"f9e0461c5fd8480f1eea4ee15e4470db8ba87f85b9582a6c668fcc07de14c822\"" Jul 2 08:19:18.795711 containerd[1694]: time="2024-07-02T08:19:18.795665946Z" level=info msg="CreateContainer within sandbox \"7666bc7b3998919ba31a0e72bb75596403843a59046f4b335ea057c086f9708a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888\"" Jul 2 08:19:18.796390 systemd[1]: Started cri-containerd-1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943.scope - libcontainer container 1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943. 
Jul 2 08:19:18.796805 containerd[1694]: time="2024-07-02T08:19:18.796768270Z" level=info msg="StartContainer for \"8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888\"" Jul 2 08:19:18.831564 systemd[1]: Started cri-containerd-f9e0461c5fd8480f1eea4ee15e4470db8ba87f85b9582a6c668fcc07de14c822.scope - libcontainer container f9e0461c5fd8480f1eea4ee15e4470db8ba87f85b9582a6c668fcc07de14c822. Jul 2 08:19:18.836017 systemd[1]: Started cri-containerd-8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888.scope - libcontainer container 8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888. Jul 2 08:19:18.857939 containerd[1694]: time="2024-07-02T08:19:18.857885016Z" level=info msg="StartContainer for \"1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943\" returns successfully" Jul 2 08:19:18.901370 containerd[1694]: time="2024-07-02T08:19:18.901325389Z" level=info msg="StartContainer for \"8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888\" returns successfully" Jul 2 08:19:18.908156 containerd[1694]: time="2024-07-02T08:19:18.908107649Z" level=info msg="StartContainer for \"f9e0461c5fd8480f1eea4ee15e4470db8ba87f85b9582a6c668fcc07de14c822\" returns successfully" Jul 2 08:19:19.009468 kubelet[2809]: E0702 08:19:19.008274 2809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.44:6443: connect: connection refused Jul 2 08:19:20.067401 kubelet[2809]: I0702 08:19:20.066230 2809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:21.002757 kubelet[2809]: E0702 08:19:21.002704 2809 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.1.1-a-7c4c792b73\" not found" 
node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:21.058325 kubelet[2809]: I0702 08:19:21.056643 2809 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-7c4c792b73" Jul 2 08:19:21.108082 kubelet[2809]: E0702 08:19:21.108039 2809 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-a-7c4c792b73\" not found" Jul 2 08:19:21.208454 kubelet[2809]: E0702 08:19:21.208412 2809 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-a-7c4c792b73\" not found" Jul 2 08:19:21.911157 kubelet[2809]: I0702 08:19:21.911100 2809 apiserver.go:52] "Watching apiserver" Jul 2 08:19:21.922727 kubelet[2809]: I0702 08:19:21.922686 2809 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:19:22.318687 kubelet[2809]: W0702 08:19:22.318420 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:19:22.417900 kubelet[2809]: W0702 08:19:22.417854 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 08:19:23.048737 systemd[1]: Reloading requested from client PID 3088 ('systemctl') (unit session-9.scope)... Jul 2 08:19:23.048757 systemd[1]: Reloading... Jul 2 08:19:23.137560 zram_generator::config[3123]: No configuration found. Jul 2 08:19:23.250868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:19:23.340590 systemd[1]: Reloading finished in 291 ms. Jul 2 08:19:23.375969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:19:23.391437 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 2 08:19:23.391669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:19:23.391732 systemd[1]: kubelet.service: Consumed 1.180s CPU time, 115.6M memory peak, 0B memory swap peak.
Jul 2 08:19:23.398582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 08:19:23.500568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 08:19:23.507260 (kubelet)[3189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 08:19:23.562409 kubelet[3189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:19:23.563770 kubelet[3189]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:19:23.563770 kubelet[3189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:19:23.563770 kubelet[3189]: I0702 08:19:23.562499 3189 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:19:23.567935 kubelet[3189]: I0702 08:19:23.567896 3189 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 08:19:23.567935 kubelet[3189]: I0702 08:19:23.567928 3189 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:19:23.568163 kubelet[3189]: I0702 08:19:23.568144 3189 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 08:19:23.569577 kubelet[3189]: I0702 08:19:23.569554 3189 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 08:19:23.571012 kubelet[3189]: I0702 08:19:23.570795 3189 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:19:23.577471 kubelet[3189]: I0702 08:19:23.577443 3189 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:19:23.578007 kubelet[3189]: I0702 08:19:23.577977 3189 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:19:23.578341 kubelet[3189]: I0702 08:19:23.578113 3189 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-a-7c4c792b73","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:19:23.578512 kubelet[3189]: I0702 08:19:23.578465 3189 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:19:23.578569 kubelet[3189]: I0702 08:19:23.578561 3189 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:19:23.578657 kubelet[3189]: I0702 08:19:23.578649 3189 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:19:23.579159 kubelet[3189]: I0702 08:19:23.578880 3189 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 08:19:23.579159 kubelet[3189]: I0702 08:19:23.578909 3189 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:19:23.579159 kubelet[3189]: I0702 08:19:23.578938 3189 kubelet.go:312] "Adding apiserver pod source"
Jul 2 08:19:23.579159 kubelet[3189]: I0702 08:19:23.578955 3189 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:19:23.588133 kubelet[3189]: I0702 08:19:23.588065 3189 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 08:19:23.589247 kubelet[3189]: I0702 08:19:23.589211 3189 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 08:19:23.589809 kubelet[3189]: I0702 08:19:23.589787 3189 server.go:1264] "Started kubelet"
Jul 2 08:19:23.593635 kubelet[3189]: I0702 08:19:23.593249 3189 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:19:23.596397 kubelet[3189]: I0702 08:19:23.596374 3189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:19:23.603676 kubelet[3189]: I0702 08:19:23.602991 3189 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 08:19:23.604213 kubelet[3189]: I0702 08:19:23.593564 3189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 08:19:23.604554 kubelet[3189]: I0702 08:19:23.604538 3189 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:19:23.605977 kubelet[3189]: I0702 08:19:23.605961 3189 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:19:23.612933 kubelet[3189]: I0702 08:19:23.612884 3189 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 08:19:23.613375 kubelet[3189]: I0702 08:19:23.613362 3189 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 08:19:23.616209 kubelet[3189]: I0702 08:19:23.616176 3189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:19:23.617635 kubelet[3189]: I0702 08:19:23.617612 3189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:19:23.618212 kubelet[3189]: I0702 08:19:23.617789 3189 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:19:23.618212 kubelet[3189]: I0702 08:19:23.617814 3189 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 08:19:23.618212 kubelet[3189]: E0702 08:19:23.617854 3189 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:19:23.625949 kubelet[3189]: I0702 08:19:23.625892 3189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 08:19:23.634579 kubelet[3189]: I0702 08:19:23.634532 3189 factory.go:221] Registration of the containerd container factory successfully
Jul 2 08:19:23.634579 kubelet[3189]: I0702 08:19:23.634561 3189 factory.go:221] Registration of the systemd container factory successfully
Jul 2 08:19:23.687630 kubelet[3189]: I0702 08:19:23.687606 3189 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:19:23.688074 kubelet[3189]: I0702 08:19:23.687820 3189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:19:23.688074 kubelet[3189]: I0702 08:19:23.687843 3189 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:19:23.688074 kubelet[3189]: I0702 08:19:23.687993 3189 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 08:19:23.688074 kubelet[3189]: I0702 08:19:23.688003 3189 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 08:19:23.688074 kubelet[3189]: I0702 08:19:23.688021 3189 policy_none.go:49] "None policy: Start"
Jul 2 08:19:23.688679 kubelet[3189]: I0702 08:19:23.688658 3189 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 08:19:23.688788 kubelet[3189]: I0702 08:19:23.688779 3189 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:19:23.688788 kubelet[3189]: I0702 08:19:23.688931 3189 state_mem.go:75] "Updated machine memory state"
Jul 2 08:19:23.693655 kubelet[3189]: I0702 08:19:23.693630 3189 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:19:23.694108 kubelet[3189]: I0702 08:19:23.694069 3189 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 08:19:23.694276 kubelet[3189]: I0702 08:19:23.694266 3189 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:19:23.711804 kubelet[3189]: I0702 08:19:23.711766 3189 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.718813 kubelet[3189]: I0702 08:19:23.718760 3189 topology_manager.go:215] "Topology Admit Handler" podUID="635c83fc299791663b142dff1b4931ba" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.719332 kubelet[3189]: I0702 08:19:23.719132 3189 topology_manager.go:215] "Topology Admit Handler" podUID="ab4ece8e0b5322870d85ae9e25fa1d22" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.721902 kubelet[3189]: I0702 08:19:23.719748 3189 topology_manager.go:215] "Topology Admit Handler" podUID="6e5cc21568e1634d01668ab1306913a8" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.728439 kubelet[3189]: W0702 08:19:23.728408 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 08:19:23.728686 kubelet[3189]: W0702 08:19:23.728667 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 08:19:23.728867 kubelet[3189]: E0702 08:19:23.728840 3189 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.1.1-a-7c4c792b73\" already exists" pod="kube-system/kube-scheduler-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.730218 kubelet[3189]: I0702 08:19:23.730196 3189 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.730836 kubelet[3189]: I0702 08:19:23.730432 3189 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.730966 kubelet[3189]: W0702 08:19:23.730937 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 08:19:23.731006 kubelet[3189]: E0702 08:19:23.730992 3189 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.915201 kubelet[3189]: I0702 08:19:23.914956 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/635c83fc299791663b142dff1b4931ba-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-7c4c792b73\" (UID: \"635c83fc299791663b142dff1b4931ba\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.915201 kubelet[3189]: I0702 08:19:23.914997 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab4ece8e0b5322870d85ae9e25fa1d22-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" (UID: \"ab4ece8e0b5322870d85ae9e25fa1d22\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.915201 kubelet[3189]: I0702 08:19:23.915019 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab4ece8e0b5322870d85ae9e25fa1d22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" (UID: \"ab4ece8e0b5322870d85ae9e25fa1d22\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.915201 kubelet[3189]: I0702 08:19:23.915040 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.915201 kubelet[3189]: I0702 08:19:23.915058 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.927718 kubelet[3189]: I0702 08:19:23.915075 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.927718 kubelet[3189]: I0702 08:19:23.915093 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.927718 kubelet[3189]: I0702 08:19:23.915108 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e5cc21568e1634d01668ab1306913a8-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-7c4c792b73\" (UID: \"6e5cc21568e1634d01668ab1306913a8\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:23.927718 kubelet[3189]: I0702 08:19:23.915124 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab4ece8e0b5322870d85ae9e25fa1d22-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" (UID: \"ab4ece8e0b5322870d85ae9e25fa1d22\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:24.580106 kubelet[3189]: I0702 08:19:24.580071 3189 apiserver.go:52] "Watching apiserver"
Jul 2 08:19:24.614206 kubelet[3189]: I0702 08:19:24.614153 3189 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jul 2 08:19:24.676249 kubelet[3189]: W0702 08:19:24.676212 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 08:19:24.676409 kubelet[3189]: E0702 08:19:24.676285 3189 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-7c4c792b73\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:24.680350 kubelet[3189]: W0702 08:19:24.680287 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 08:19:24.680491 kubelet[3189]: E0702 08:19:24.680358 3189 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.1.1-a-7c4c792b73\" already exists" pod="kube-system/kube-scheduler-ci-3975.1.1-a-7c4c792b73"
Jul 2 08:19:24.699844 kubelet[3189]: I0702 08:19:24.698681 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-a-7c4c792b73" podStartSLOduration=2.698661123 podStartE2EDuration="2.698661123s" podCreationTimestamp="2024-07-02 08:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:19:24.689972025 +0000 UTC m=+1.179643836" watchObservedRunningTime="2024-07-02 08:19:24.698661123 +0000 UTC m=+1.188332934"
Jul 2 08:19:24.706583 kubelet[3189]: I0702 08:19:24.706232 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-a-7c4c792b73" podStartSLOduration=1.70621298 podStartE2EDuration="1.70621298s" podCreationTimestamp="2024-07-02 08:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:19:24.698881404 +0000 UTC m=+1.188553175" watchObservedRunningTime="2024-07-02 08:19:24.70621298 +0000 UTC m=+1.195884791"
Jul 2 08:19:24.717966 kubelet[3189]: I0702 08:19:24.717764 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-a-7c4c792b73" podStartSLOduration=2.717744604 podStartE2EDuration="2.717744604s" podCreationTimestamp="2024-07-02 08:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:19:24.707061421 +0000 UTC m=+1.196733232" watchObservedRunningTime="2024-07-02 08:19:24.717744604 +0000 UTC m=+1.207416415"
Jul 2 08:19:28.729675 sudo[2276]: pam_unix(sudo:session): session closed for user root
Jul 2 08:19:28.813703 sshd[2273]: pam_unix(sshd:session): session closed for user core
Jul 2 08:19:28.820834 systemd[1]: sshd@6-10.200.20.44:22-10.200.16.10:40688.service: Deactivated successfully.
Jul 2 08:19:28.824139 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 08:19:28.824940 systemd[1]: session-9.scope: Consumed 6.929s CPU time, 137.2M memory peak, 0B memory swap peak.
Jul 2 08:19:28.828046 systemd-logind[1658]: Session 9 logged out. Waiting for processes to exit.
Jul 2 08:19:28.830552 systemd-logind[1658]: Removed session 9.
Jul 2 08:19:38.278632 kubelet[3189]: I0702 08:19:38.278466 3189 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 08:19:38.280735 containerd[1694]: time="2024-07-02T08:19:38.280594352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 08:19:38.281251 kubelet[3189]: I0702 08:19:38.280819 3189 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 08:19:39.268759 kubelet[3189]: I0702 08:19:39.267335 3189 topology_manager.go:215] "Topology Admit Handler" podUID="dfab8a25-8c47-4324-82ed-9c7426efff79" podNamespace="kube-system" podName="kube-proxy-lxkqb"
Jul 2 08:19:39.278945 systemd[1]: Created slice kubepods-besteffort-poddfab8a25_8c47_4324_82ed_9c7426efff79.slice - libcontainer container kubepods-besteffort-poddfab8a25_8c47_4324_82ed_9c7426efff79.slice.
Jul 2 08:19:39.312537 kubelet[3189]: I0702 08:19:39.312371 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfab8a25-8c47-4324-82ed-9c7426efff79-xtables-lock\") pod \"kube-proxy-lxkqb\" (UID: \"dfab8a25-8c47-4324-82ed-9c7426efff79\") " pod="kube-system/kube-proxy-lxkqb"
Jul 2 08:19:39.312537 kubelet[3189]: I0702 08:19:39.312406 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfab8a25-8c47-4324-82ed-9c7426efff79-lib-modules\") pod \"kube-proxy-lxkqb\" (UID: \"dfab8a25-8c47-4324-82ed-9c7426efff79\") " pod="kube-system/kube-proxy-lxkqb"
Jul 2 08:19:39.312537 kubelet[3189]: I0702 08:19:39.312424 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfab8a25-8c47-4324-82ed-9c7426efff79-kube-proxy\") pod \"kube-proxy-lxkqb\" (UID: \"dfab8a25-8c47-4324-82ed-9c7426efff79\") " pod="kube-system/kube-proxy-lxkqb"
Jul 2 08:19:39.312537 kubelet[3189]: I0702 08:19:39.312451 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t9nt\" (UniqueName: \"kubernetes.io/projected/dfab8a25-8c47-4324-82ed-9c7426efff79-kube-api-access-7t9nt\") pod \"kube-proxy-lxkqb\" (UID: \"dfab8a25-8c47-4324-82ed-9c7426efff79\") " pod="kube-system/kube-proxy-lxkqb"
Jul 2 08:19:39.389287 kubelet[3189]: I0702 08:19:39.389227 3189 topology_manager.go:215] "Topology Admit Handler" podUID="ebf83237-ac66-49e7-8154-60ba510fcacf" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-lxmkj"
Jul 2 08:19:39.399207 systemd[1]: Created slice kubepods-besteffort-podebf83237_ac66_49e7_8154_60ba510fcacf.slice - libcontainer container kubepods-besteffort-podebf83237_ac66_49e7_8154_60ba510fcacf.slice.
Jul 2 08:19:39.416604 kubelet[3189]: I0702 08:19:39.416345 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ebf83237-ac66-49e7-8154-60ba510fcacf-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-lxmkj\" (UID: \"ebf83237-ac66-49e7-8154-60ba510fcacf\") " pod="tigera-operator/tigera-operator-76ff79f7fd-lxmkj"
Jul 2 08:19:39.417812 kubelet[3189]: I0702 08:19:39.417739 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fldp5\" (UniqueName: \"kubernetes.io/projected/ebf83237-ac66-49e7-8154-60ba510fcacf-kube-api-access-fldp5\") pod \"tigera-operator-76ff79f7fd-lxmkj\" (UID: \"ebf83237-ac66-49e7-8154-60ba510fcacf\") " pod="tigera-operator/tigera-operator-76ff79f7fd-lxmkj"
Jul 2 08:19:39.589197 containerd[1694]: time="2024-07-02T08:19:39.589071555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxkqb,Uid:dfab8a25-8c47-4324-82ed-9c7426efff79,Namespace:kube-system,Attempt:0,}"
Jul 2 08:19:39.636551 containerd[1694]: time="2024-07-02T08:19:39.636289200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:19:39.636551 containerd[1694]: time="2024-07-02T08:19:39.636388160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:19:39.636551 containerd[1694]: time="2024-07-02T08:19:39.636422400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:19:39.636551 containerd[1694]: time="2024-07-02T08:19:39.636440520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:19:39.659527 systemd[1]: Started cri-containerd-ab08dcf29c52df8f501c739276e12a3126547b83cc4997f14f800d74923527b6.scope - libcontainer container ab08dcf29c52df8f501c739276e12a3126547b83cc4997f14f800d74923527b6.
Jul 2 08:19:39.680731 containerd[1694]: time="2024-07-02T08:19:39.680410160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxkqb,Uid:dfab8a25-8c47-4324-82ed-9c7426efff79,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab08dcf29c52df8f501c739276e12a3126547b83cc4997f14f800d74923527b6\""
Jul 2 08:19:39.685203 containerd[1694]: time="2024-07-02T08:19:39.685152928Z" level=info msg="CreateContainer within sandbox \"ab08dcf29c52df8f501c739276e12a3126547b83cc4997f14f800d74923527b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 08:19:39.703196 containerd[1694]: time="2024-07-02T08:19:39.703146601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-lxmkj,Uid:ebf83237-ac66-49e7-8154-60ba510fcacf,Namespace:tigera-operator,Attempt:0,}"
Jul 2 08:19:39.736984 containerd[1694]: time="2024-07-02T08:19:39.736919742Z" level=info msg="CreateContainer within sandbox \"ab08dcf29c52df8f501c739276e12a3126547b83cc4997f14f800d74923527b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"402edeb087c95a5c02541e9b3309b204dc000eb15cd47da7b7593bd21d9eda56\""
Jul 2 08:19:39.737995 containerd[1694]: time="2024-07-02T08:19:39.737902704Z" level=info msg="StartContainer for \"402edeb087c95a5c02541e9b3309b204dc000eb15cd47da7b7593bd21d9eda56\""
Jul 2 08:19:39.768688 systemd[1]: Started cri-containerd-402edeb087c95a5c02541e9b3309b204dc000eb15cd47da7b7593bd21d9eda56.scope - libcontainer container 402edeb087c95a5c02541e9b3309b204dc000eb15cd47da7b7593bd21d9eda56.
Jul 2 08:19:39.774577 containerd[1694]: time="2024-07-02T08:19:39.774109449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:19:39.774986 containerd[1694]: time="2024-07-02T08:19:39.774577810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:19:39.774986 containerd[1694]: time="2024-07-02T08:19:39.774634970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:19:39.774986 containerd[1694]: time="2024-07-02T08:19:39.774645930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:19:39.797511 systemd[1]: Started cri-containerd-f52868ac0293c4e9f8a2b9e4fbf0ffc348adaa050539e23149400a65df9a2323.scope - libcontainer container f52868ac0293c4e9f8a2b9e4fbf0ffc348adaa050539e23149400a65df9a2323.
Jul 2 08:19:39.805745 containerd[1694]: time="2024-07-02T08:19:39.805661706Z" level=info msg="StartContainer for \"402edeb087c95a5c02541e9b3309b204dc000eb15cd47da7b7593bd21d9eda56\" returns successfully"
Jul 2 08:19:39.837721 containerd[1694]: time="2024-07-02T08:19:39.837681604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-lxmkj,Uid:ebf83237-ac66-49e7-8154-60ba510fcacf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f52868ac0293c4e9f8a2b9e4fbf0ffc348adaa050539e23149400a65df9a2323\""
Jul 2 08:19:39.841389 containerd[1694]: time="2024-07-02T08:19:39.841188330Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 08:19:41.334599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048007831.mount: Deactivated successfully.
Jul 2 08:19:41.719013 containerd[1694]: time="2024-07-02T08:19:41.718952041Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:41.721748 containerd[1694]: time="2024-07-02T08:19:41.721597685Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473594"
Jul 2 08:19:41.725666 containerd[1694]: time="2024-07-02T08:19:41.725609333Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:41.731273 containerd[1694]: time="2024-07-02T08:19:41.731216343Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 08:19:41.732062 containerd[1694]: time="2024-07-02T08:19:41.732031384Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.890788214s"
Jul 2 08:19:41.732252 containerd[1694]: time="2024-07-02T08:19:41.732166184Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\""
Jul 2 08:19:41.734877 containerd[1694]: time="2024-07-02T08:19:41.734740469Z" level=info msg="CreateContainer within sandbox \"f52868ac0293c4e9f8a2b9e4fbf0ffc348adaa050539e23149400a65df9a2323\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 08:19:41.767525 containerd[1694]: time="2024-07-02T08:19:41.767464048Z" level=info msg="CreateContainer within sandbox \"f52868ac0293c4e9f8a2b9e4fbf0ffc348adaa050539e23149400a65df9a2323\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916\""
Jul 2 08:19:41.769083 containerd[1694]: time="2024-07-02T08:19:41.768191569Z" level=info msg="StartContainer for \"a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916\""
Jul 2 08:19:41.796522 systemd[1]: Started cri-containerd-a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916.scope - libcontainer container a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916.
Jul 2 08:19:41.824732 containerd[1694]: time="2024-07-02T08:19:41.824675191Z" level=info msg="StartContainer for \"a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916\" returns successfully"
Jul 2 08:19:42.703984 kubelet[3189]: I0702 08:19:42.703751 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lxkqb" podStartSLOduration=3.703730539 podStartE2EDuration="3.703730539s" podCreationTimestamp="2024-07-02 08:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:19:40.699130039 +0000 UTC m=+17.188801850" watchObservedRunningTime="2024-07-02 08:19:42.703730539 +0000 UTC m=+19.193402350"
Jul 2 08:19:45.594564 kubelet[3189]: I0702 08:19:45.594493 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-lxmkj" podStartSLOduration=4.701321275 podStartE2EDuration="6.594474973s" podCreationTimestamp="2024-07-02 08:19:39 +0000 UTC" firstStartedPulling="2024-07-02 08:19:39.840128088 +0000 UTC m=+16.329799899" lastFinishedPulling="2024-07-02 08:19:41.733281786 +0000 UTC m=+18.222953597" observedRunningTime="2024-07-02 08:19:42.703959179 +0000 UTC m=+19.193630990" watchObservedRunningTime="2024-07-02 08:19:45.594474973 +0000 UTC m=+22.084146784"
Jul 2 08:19:45.594979 kubelet[3189]: I0702 08:19:45.594660 3189 topology_manager.go:215] "Topology Admit Handler" podUID="33659ef7-3d62-4469-8e3f-5119eb6af7de" podNamespace="calico-system" podName="calico-typha-68685c664-j9dh2"
Jul 2 08:19:45.606905 systemd[1]: Created slice kubepods-besteffort-pod33659ef7_3d62_4469_8e3f_5119eb6af7de.slice - libcontainer container kubepods-besteffort-pod33659ef7_3d62_4469_8e3f_5119eb6af7de.slice.
Jul 2 08:19:45.659096 kubelet[3189]: I0702 08:19:45.659047 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33659ef7-3d62-4469-8e3f-5119eb6af7de-tigera-ca-bundle\") pod \"calico-typha-68685c664-j9dh2\" (UID: \"33659ef7-3d62-4469-8e3f-5119eb6af7de\") " pod="calico-system/calico-typha-68685c664-j9dh2"
Jul 2 08:19:45.659096 kubelet[3189]: I0702 08:19:45.659088 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/33659ef7-3d62-4469-8e3f-5119eb6af7de-typha-certs\") pod \"calico-typha-68685c664-j9dh2\" (UID: \"33659ef7-3d62-4469-8e3f-5119eb6af7de\") " pod="calico-system/calico-typha-68685c664-j9dh2"
Jul 2 08:19:45.659260 kubelet[3189]: I0702 08:19:45.659114 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ph66\" (UniqueName: \"kubernetes.io/projected/33659ef7-3d62-4469-8e3f-5119eb6af7de-kube-api-access-9ph66\") pod \"calico-typha-68685c664-j9dh2\" (UID: \"33659ef7-3d62-4469-8e3f-5119eb6af7de\") " pod="calico-system/calico-typha-68685c664-j9dh2"
Jul 2 08:19:45.680099 kubelet[3189]: I0702 08:19:45.680044 3189 topology_manager.go:215] "Topology Admit Handler" podUID="e079383c-d8bd-4496-a2a0-36ddc99e313e" podNamespace="calico-system" podName="calico-node-d9bzs"
Jul 2 08:19:45.688280 systemd[1]: Created slice kubepods-besteffort-pode079383c_d8bd_4496_a2a0_36ddc99e313e.slice - libcontainer container kubepods-besteffort-pode079383c_d8bd_4496_a2a0_36ddc99e313e.slice.
Jul 2 08:19:45.760810 kubelet[3189]: I0702 08:19:45.759466 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e079383c-d8bd-4496-a2a0-36ddc99e313e-node-certs\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.760810 kubelet[3189]: I0702 08:19:45.759513 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-cni-net-dir\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.760810 kubelet[3189]: I0702 08:19:45.759542 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-var-lib-calico\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.760810 kubelet[3189]: I0702 08:19:45.759560 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-cni-bin-dir\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.760810 kubelet[3189]: I0702 08:19:45.759577 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffx6\" (UniqueName: \"kubernetes.io/projected/e079383c-d8bd-4496-a2a0-36ddc99e313e-kube-api-access-tffx6\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.761051 kubelet[3189]: I0702 08:19:45.759596 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-var-run-calico\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.761051 kubelet[3189]: I0702 08:19:45.759609 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-lib-modules\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.761051 kubelet[3189]: I0702 08:19:45.759624 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-policysync\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.761051 kubelet[3189]: I0702 08:19:45.759639 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e079383c-d8bd-4496-a2a0-36ddc99e313e-tigera-ca-bundle\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.761051 kubelet[3189]: I0702 08:19:45.759677 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-xtables-lock\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs"
Jul 2 08:19:45.761166 kubelet[3189]: I0702
08:19:45.759692 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-cni-log-dir\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs" Jul 2 08:19:45.761166 kubelet[3189]: I0702 08:19:45.759710 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e079383c-d8bd-4496-a2a0-36ddc99e313e-flexvol-driver-host\") pod \"calico-node-d9bzs\" (UID: \"e079383c-d8bd-4496-a2a0-36ddc99e313e\") " pod="calico-system/calico-node-d9bzs" Jul 2 08:19:45.797448 kubelet[3189]: I0702 08:19:45.797407 3189 topology_manager.go:215] "Topology Admit Handler" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" podNamespace="calico-system" podName="csi-node-driver-cvsr9" Jul 2 08:19:45.797998 kubelet[3189]: E0702 08:19:45.797942 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsr9" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" Jul 2 08:19:45.860946 kubelet[3189]: I0702 08:19:45.860816 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ab198-8e93-4749-942e-804a7ceb88e7-kubelet-dir\") pod \"csi-node-driver-cvsr9\" (UID: \"ec1ab198-8e93-4749-942e-804a7ceb88e7\") " pod="calico-system/csi-node-driver-cvsr9" Jul 2 08:19:45.861693 kubelet[3189]: I0702 08:19:45.861062 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ec1ab198-8e93-4749-942e-804a7ceb88e7-varrun\") pod 
\"csi-node-driver-cvsr9\" (UID: \"ec1ab198-8e93-4749-942e-804a7ceb88e7\") " pod="calico-system/csi-node-driver-cvsr9" Jul 2 08:19:45.861693 kubelet[3189]: I0702 08:19:45.861093 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ab198-8e93-4749-942e-804a7ceb88e7-socket-dir\") pod \"csi-node-driver-cvsr9\" (UID: \"ec1ab198-8e93-4749-942e-804a7ceb88e7\") " pod="calico-system/csi-node-driver-cvsr9" Jul 2 08:19:45.861693 kubelet[3189]: I0702 08:19:45.861211 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ab198-8e93-4749-942e-804a7ceb88e7-registration-dir\") pod \"csi-node-driver-cvsr9\" (UID: \"ec1ab198-8e93-4749-942e-804a7ceb88e7\") " pod="calico-system/csi-node-driver-cvsr9" Jul 2 08:19:45.861693 kubelet[3189]: I0702 08:19:45.861229 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5ch\" (UniqueName: \"kubernetes.io/projected/ec1ab198-8e93-4749-942e-804a7ceb88e7-kube-api-access-jt5ch\") pod \"csi-node-driver-cvsr9\" (UID: \"ec1ab198-8e93-4749-942e-804a7ceb88e7\") " pod="calico-system/csi-node-driver-cvsr9" Jul 2 08:19:45.866472 kubelet[3189]: E0702 08:19:45.866432 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.866472 kubelet[3189]: W0702 08:19:45.866458 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.866754 kubelet[3189]: E0702 08:19:45.866484 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.866754 kubelet[3189]: E0702 08:19:45.866735 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.866754 kubelet[3189]: W0702 08:19:45.866745 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.870342 kubelet[3189]: E0702 08:19:45.866856 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.880968 kubelet[3189]: E0702 08:19:45.880518 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.880968 kubelet[3189]: W0702 08:19:45.880544 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.880968 kubelet[3189]: E0702 08:19:45.880566 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.920399 containerd[1694]: time="2024-07-02T08:19:45.919882744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68685c664-j9dh2,Uid:33659ef7-3d62-4469-8e3f-5119eb6af7de,Namespace:calico-system,Attempt:0,}" Jul 2 08:19:45.963163 kubelet[3189]: E0702 08:19:45.963080 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.963163 kubelet[3189]: W0702 08:19:45.963109 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.963163 kubelet[3189]: E0702 08:19:45.963129 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.963448 kubelet[3189]: E0702 08:19:45.963428 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.963719 kubelet[3189]: W0702 08:19:45.963443 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.963719 kubelet[3189]: E0702 08:19:45.963525 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.964450 kubelet[3189]: E0702 08:19:45.963982 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.964450 kubelet[3189]: W0702 08:19:45.964000 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.964450 kubelet[3189]: E0702 08:19:45.964020 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.964450 kubelet[3189]: E0702 08:19:45.964265 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.964450 kubelet[3189]: W0702 08:19:45.964276 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.964450 kubelet[3189]: E0702 08:19:45.964326 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.967991 kubelet[3189]: E0702 08:19:45.965087 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.967991 kubelet[3189]: W0702 08:19:45.965105 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.967991 kubelet[3189]: E0702 08:19:45.967525 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.967991 kubelet[3189]: W0702 08:19:45.967538 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.967991 kubelet[3189]: E0702 08:19:45.967706 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.967991 kubelet[3189]: W0702 08:19:45.967713 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.967991 kubelet[3189]: E0702 08:19:45.967867 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.967991 kubelet[3189]: W0702 08:19:45.967875 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.967991 kubelet[3189]: E0702 08:19:45.967886 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.968489 kubelet[3189]: E0702 08:19:45.968270 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.968489 kubelet[3189]: E0702 08:19:45.968298 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.968489 kubelet[3189]: E0702 08:19:45.968336 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.968753 kubelet[3189]: E0702 08:19:45.968531 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.968753 kubelet[3189]: W0702 08:19:45.968543 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.968753 kubelet[3189]: E0702 08:19:45.968559 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.968753 kubelet[3189]: E0702 08:19:45.968753 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.968860 kubelet[3189]: W0702 08:19:45.968763 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.968860 kubelet[3189]: E0702 08:19:45.968780 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.970154 kubelet[3189]: E0702 08:19:45.969011 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.970154 kubelet[3189]: W0702 08:19:45.969027 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.970154 kubelet[3189]: E0702 08:19:45.969038 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.970154 kubelet[3189]: E0702 08:19:45.969199 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.970154 kubelet[3189]: W0702 08:19:45.969212 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.970154 kubelet[3189]: E0702 08:19:45.969222 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.970376 kubelet[3189]: E0702 08:19:45.970293 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.970404 kubelet[3189]: W0702 08:19:45.970378 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.970429 kubelet[3189]: E0702 08:19:45.970414 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.971895 kubelet[3189]: E0702 08:19:45.971182 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.971895 kubelet[3189]: W0702 08:19:45.971199 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.971895 kubelet[3189]: E0702 08:19:45.971681 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.971895 kubelet[3189]: W0702 08:19:45.971691 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.971895 kubelet[3189]: E0702 08:19:45.971718 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.971895 kubelet[3189]: E0702 08:19:45.971766 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.973081 kubelet[3189]: E0702 08:19:45.972430 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.973081 kubelet[3189]: W0702 08:19:45.972453 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.973081 kubelet[3189]: E0702 08:19:45.972544 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.973081 kubelet[3189]: E0702 08:19:45.972697 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.973081 kubelet[3189]: W0702 08:19:45.972706 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.973734 kubelet[3189]: E0702 08:19:45.973510 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.973734 kubelet[3189]: E0702 08:19:45.973660 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.973734 kubelet[3189]: W0702 08:19:45.973669 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.973734 kubelet[3189]: E0702 08:19:45.973690 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.973995 kubelet[3189]: E0702 08:19:45.973845 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.973995 kubelet[3189]: W0702 08:19:45.973853 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.973995 kubelet[3189]: E0702 08:19:45.973869 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.974935 kubelet[3189]: E0702 08:19:45.974108 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.974935 kubelet[3189]: W0702 08:19:45.974149 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.975050 kubelet[3189]: E0702 08:19:45.975003 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.975489 kubelet[3189]: E0702 08:19:45.975461 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.975489 kubelet[3189]: W0702 08:19:45.975479 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.975489 kubelet[3189]: E0702 08:19:45.975495 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.976084 kubelet[3189]: E0702 08:19:45.975648 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.976084 kubelet[3189]: W0702 08:19:45.975666 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.976084 kubelet[3189]: E0702 08:19:45.975675 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.976084 kubelet[3189]: E0702 08:19:45.975868 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.976084 kubelet[3189]: W0702 08:19:45.975881 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.976084 kubelet[3189]: E0702 08:19:45.975899 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.976881 kubelet[3189]: E0702 08:19:45.976326 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.976881 kubelet[3189]: W0702 08:19:45.976342 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.976881 kubelet[3189]: E0702 08:19:45.976365 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.976881 kubelet[3189]: E0702 08:19:45.976635 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.976881 kubelet[3189]: W0702 08:19:45.976647 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.976881 kubelet[3189]: E0702 08:19:45.976660 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:45.993747 containerd[1694]: time="2024-07-02T08:19:45.993326447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d9bzs,Uid:e079383c-d8bd-4496-a2a0-36ddc99e313e,Namespace:calico-system,Attempt:0,}" Jul 2 08:19:45.994043 kubelet[3189]: E0702 08:19:45.993940 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:45.994043 kubelet[3189]: W0702 08:19:45.993971 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:45.994043 kubelet[3189]: E0702 08:19:45.993993 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:45.994794 containerd[1694]: time="2024-07-02T08:19:45.994265887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:19:45.995762 containerd[1694]: time="2024-07-02T08:19:45.995562607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:45.995762 containerd[1694]: time="2024-07-02T08:19:45.995599767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:19:45.995762 containerd[1694]: time="2024-07-02T08:19:45.995611727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:46.018559 systemd[1]: Started cri-containerd-3b1dc998735f6789ee8ebd28f3ecded3b1b53e8336f44555b70ae7dda3414ebf.scope - libcontainer container 3b1dc998735f6789ee8ebd28f3ecded3b1b53e8336f44555b70ae7dda3414ebf. 
Jul 2 08:19:46.057328 containerd[1694]: time="2024-07-02T08:19:46.056837993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:19:46.057708 containerd[1694]: time="2024-07-02T08:19:46.056999913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:46.057708 containerd[1694]: time="2024-07-02T08:19:46.057038633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:19:46.057708 containerd[1694]: time="2024-07-02T08:19:46.057050753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:19:46.089562 systemd[1]: Started cri-containerd-4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11.scope - libcontainer container 4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11. 
Jul 2 08:19:46.101507 containerd[1694]: time="2024-07-02T08:19:46.101451663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68685c664-j9dh2,Uid:33659ef7-3d62-4469-8e3f-5119eb6af7de,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b1dc998735f6789ee8ebd28f3ecded3b1b53e8336f44555b70ae7dda3414ebf\"" Jul 2 08:19:46.107510 containerd[1694]: time="2024-07-02T08:19:46.107466342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 08:19:46.134814 containerd[1694]: time="2024-07-02T08:19:46.134595296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d9bzs,Uid:e079383c-d8bd-4496-a2a0-36ddc99e313e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11\"" Jul 2 08:19:47.618553 kubelet[3189]: E0702 08:19:47.618275 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsr9" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" Jul 2 08:19:47.821393 containerd[1694]: time="2024-07-02T08:19:47.821066317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:47.823926 containerd[1694]: time="2024-07-02T08:19:47.823871636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 08:19:47.829259 containerd[1694]: time="2024-07-02T08:19:47.828402235Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:47.835929 containerd[1694]: time="2024-07-02T08:19:47.835809354Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:47.836322 containerd[1694]: time="2024-07-02T08:19:47.836227473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.728714891s" Jul 2 08:19:47.836322 containerd[1694]: time="2024-07-02T08:19:47.836259113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 08:19:47.838396 containerd[1694]: time="2024-07-02T08:19:47.838302553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 08:19:47.853253 containerd[1694]: time="2024-07-02T08:19:47.853182710Z" level=info msg="CreateContainer within sandbox \"3b1dc998735f6789ee8ebd28f3ecded3b1b53e8336f44555b70ae7dda3414ebf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 08:19:47.884392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796495445.mount: Deactivated successfully. 
Jul 2 08:19:47.900062 containerd[1694]: time="2024-07-02T08:19:47.899952139Z" level=info msg="CreateContainer within sandbox \"3b1dc998735f6789ee8ebd28f3ecded3b1b53e8336f44555b70ae7dda3414ebf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0d10e4538c9c7e1254c108d29113f1dcfecef2b5057ac33f143945216f69158c\"" Jul 2 08:19:47.901160 containerd[1694]: time="2024-07-02T08:19:47.901114819Z" level=info msg="StartContainer for \"0d10e4538c9c7e1254c108d29113f1dcfecef2b5057ac33f143945216f69158c\"" Jul 2 08:19:47.932292 systemd[1]: Started cri-containerd-0d10e4538c9c7e1254c108d29113f1dcfecef2b5057ac33f143945216f69158c.scope - libcontainer container 0d10e4538c9c7e1254c108d29113f1dcfecef2b5057ac33f143945216f69158c. Jul 2 08:19:47.968784 containerd[1694]: time="2024-07-02T08:19:47.968720764Z" level=info msg="StartContainer for \"0d10e4538c9c7e1254c108d29113f1dcfecef2b5057ac33f143945216f69158c\" returns successfully" Jul 2 08:19:48.774244 kubelet[3189]: E0702 08:19:48.774206 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.774244 kubelet[3189]: W0702 08:19:48.774233 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.774800 kubelet[3189]: E0702 08:19:48.774255 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.774800 kubelet[3189]: E0702 08:19:48.774458 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.774800 kubelet[3189]: W0702 08:19:48.774468 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.774800 kubelet[3189]: E0702 08:19:48.774477 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.774800 kubelet[3189]: E0702 08:19:48.774638 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.774800 kubelet[3189]: W0702 08:19:48.774658 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.774800 kubelet[3189]: E0702 08:19:48.774667 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.775058 kubelet[3189]: E0702 08:19:48.774853 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.775058 kubelet[3189]: W0702 08:19:48.774862 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.775058 kubelet[3189]: E0702 08:19:48.774871 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.775058 kubelet[3189]: E0702 08:19:48.775040 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.775058 kubelet[3189]: W0702 08:19:48.775048 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.775058 kubelet[3189]: E0702 08:19:48.775056 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.775321 kubelet[3189]: E0702 08:19:48.775287 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.775321 kubelet[3189]: W0702 08:19:48.775302 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777012 kubelet[3189]: E0702 08:19:48.775327 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.777012 kubelet[3189]: E0702 08:19:48.775504 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777012 kubelet[3189]: W0702 08:19:48.775512 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777012 kubelet[3189]: E0702 08:19:48.775520 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.777012 kubelet[3189]: E0702 08:19:48.775736 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777012 kubelet[3189]: W0702 08:19:48.775745 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777012 kubelet[3189]: E0702 08:19:48.775753 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.777012 kubelet[3189]: E0702 08:19:48.775915 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777012 kubelet[3189]: W0702 08:19:48.775923 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777012 kubelet[3189]: E0702 08:19:48.775931 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.777233 kubelet[3189]: E0702 08:19:48.776061 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777233 kubelet[3189]: W0702 08:19:48.776075 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777233 kubelet[3189]: E0702 08:19:48.776086 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.777233 kubelet[3189]: E0702 08:19:48.776214 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777233 kubelet[3189]: W0702 08:19:48.776228 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777233 kubelet[3189]: E0702 08:19:48.776236 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.777233 kubelet[3189]: E0702 08:19:48.776393 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777233 kubelet[3189]: W0702 08:19:48.776400 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777233 kubelet[3189]: E0702 08:19:48.776407 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.777233 kubelet[3189]: E0702 08:19:48.776543 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777496 kubelet[3189]: W0702 08:19:48.776557 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777496 kubelet[3189]: E0702 08:19:48.776565 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.777496 kubelet[3189]: E0702 08:19:48.776698 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777496 kubelet[3189]: W0702 08:19:48.776712 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777496 kubelet[3189]: E0702 08:19:48.776719 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.777496 kubelet[3189]: E0702 08:19:48.776915 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.777496 kubelet[3189]: W0702 08:19:48.776924 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.777496 kubelet[3189]: E0702 08:19:48.776932 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.791465 kubelet[3189]: E0702 08:19:48.791375 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.791465 kubelet[3189]: W0702 08:19:48.791399 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.791465 kubelet[3189]: E0702 08:19:48.791420 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.791758 kubelet[3189]: E0702 08:19:48.791596 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.791758 kubelet[3189]: W0702 08:19:48.791604 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.791758 kubelet[3189]: E0702 08:19:48.791621 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.791943 kubelet[3189]: E0702 08:19:48.791803 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.791943 kubelet[3189]: W0702 08:19:48.791811 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.791943 kubelet[3189]: E0702 08:19:48.791827 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.792198 kubelet[3189]: E0702 08:19:48.792079 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.792198 kubelet[3189]: W0702 08:19:48.792093 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.792198 kubelet[3189]: E0702 08:19:48.792112 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.792445 kubelet[3189]: E0702 08:19:48.792433 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.792573 kubelet[3189]: W0702 08:19:48.792525 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.792573 kubelet[3189]: E0702 08:19:48.792546 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.792792 kubelet[3189]: E0702 08:19:48.792762 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.792792 kubelet[3189]: W0702 08:19:48.792785 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.792868 kubelet[3189]: E0702 08:19:48.792807 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.792989 kubelet[3189]: E0702 08:19:48.792971 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.792989 kubelet[3189]: W0702 08:19:48.792987 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.793089 kubelet[3189]: E0702 08:19:48.793070 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.793186 kubelet[3189]: E0702 08:19:48.793168 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.793186 kubelet[3189]: W0702 08:19:48.793182 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.793289 kubelet[3189]: E0702 08:19:48.793257 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.793392 kubelet[3189]: E0702 08:19:48.793376 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.793392 kubelet[3189]: W0702 08:19:48.793389 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.793569 kubelet[3189]: E0702 08:19:48.793479 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.793697 kubelet[3189]: E0702 08:19:48.793675 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.793697 kubelet[3189]: W0702 08:19:48.793692 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.793759 kubelet[3189]: E0702 08:19:48.793708 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.793939 kubelet[3189]: E0702 08:19:48.793922 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.793939 kubelet[3189]: W0702 08:19:48.793936 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.794064 kubelet[3189]: E0702 08:19:48.793954 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.794353 kubelet[3189]: E0702 08:19:48.794326 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.794353 kubelet[3189]: W0702 08:19:48.794349 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.794432 kubelet[3189]: E0702 08:19:48.794365 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.794582 kubelet[3189]: E0702 08:19:48.794555 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.794582 kubelet[3189]: W0702 08:19:48.794580 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.794647 kubelet[3189]: E0702 08:19:48.794614 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.794899 kubelet[3189]: E0702 08:19:48.794880 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.794899 kubelet[3189]: W0702 08:19:48.794896 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.795016 kubelet[3189]: E0702 08:19:48.794996 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.795909 kubelet[3189]: E0702 08:19:48.795877 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.795909 kubelet[3189]: W0702 08:19:48.795905 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.796402 kubelet[3189]: E0702 08:19:48.796371 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.796402 kubelet[3189]: W0702 08:19:48.796399 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.796475 kubelet[3189]: E0702 08:19:48.796413 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.796475 kubelet[3189]: E0702 08:19:48.796442 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:48.798034 kubelet[3189]: E0702 08:19:48.797406 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.798034 kubelet[3189]: W0702 08:19:48.797419 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.798034 kubelet[3189]: E0702 08:19:48.797432 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:19:48.798439 kubelet[3189]: E0702 08:19:48.798405 3189 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:19:48.798439 kubelet[3189]: W0702 08:19:48.798427 3189 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:19:48.798521 kubelet[3189]: E0702 08:19:48.798448 3189 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:19:49.080214 containerd[1694]: time="2024-07-02T08:19:49.080064137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:49.083805 containerd[1694]: time="2024-07-02T08:19:49.083748946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 08:19:49.088054 containerd[1694]: time="2024-07-02T08:19:49.087949795Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:49.093470 containerd[1694]: time="2024-07-02T08:19:49.093393568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:49.094191 containerd[1694]: time="2024-07-02T08:19:49.094053289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.255668456s" Jul 2 08:19:49.094191 containerd[1694]: time="2024-07-02T08:19:49.094094890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 08:19:49.097518 containerd[1694]: time="2024-07-02T08:19:49.097454457Z" level=info msg="CreateContainer within sandbox \"4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 08:19:49.146929 containerd[1694]: time="2024-07-02T08:19:49.146849611Z" level=info msg="CreateContainer within sandbox \"4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1\"" Jul 2 08:19:49.149152 containerd[1694]: time="2024-07-02T08:19:49.147640853Z" level=info msg="StartContainer for \"627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1\"" Jul 2 08:19:49.190562 systemd[1]: Started cri-containerd-627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1.scope - libcontainer container 627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1. Jul 2 08:19:49.221951 containerd[1694]: time="2024-07-02T08:19:49.221897663Z" level=info msg="StartContainer for \"627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1\" returns successfully" Jul 2 08:19:49.230535 systemd[1]: cri-containerd-627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1.scope: Deactivated successfully. 
Jul 2 08:19:49.619702 kubelet[3189]: E0702 08:19:49.619631 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsr9" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" Jul 2 08:19:49.721342 kubelet[3189]: I0702 08:19:49.720223 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:19:49.733193 kubelet[3189]: I0702 08:19:49.733073 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68685c664-j9dh2" podStartSLOduration=3.001983906 podStartE2EDuration="4.733053397s" podCreationTimestamp="2024-07-02 08:19:45 +0000 UTC" firstStartedPulling="2024-07-02 08:19:46.106574182 +0000 UTC m=+22.596245953" lastFinishedPulling="2024-07-02 08:19:47.837643633 +0000 UTC m=+24.327315444" observedRunningTime="2024-07-02 08:19:48.728390913 +0000 UTC m=+25.218062724" watchObservedRunningTime="2024-07-02 08:19:49.733053397 +0000 UTC m=+26.222725208" Jul 2 08:19:49.843180 systemd[1]: run-containerd-runc-k8s.io-627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1-runc.tLs0Dx.mount: Deactivated successfully. Jul 2 08:19:49.843284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1-rootfs.mount: Deactivated successfully. 
Jul 2 08:19:50.180303 containerd[1694]: time="2024-07-02T08:19:50.180195904Z" level=info msg="shim disconnected" id=627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1 namespace=k8s.io Jul 2 08:19:50.181045 containerd[1694]: time="2024-07-02T08:19:50.180285585Z" level=warning msg="cleaning up after shim disconnected" id=627ec54271b60c80832cbcd61c0047eae683a9ddb4124430932eff51d4fb57d1 namespace=k8s.io Jul 2 08:19:50.181045 containerd[1694]: time="2024-07-02T08:19:50.180535025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:19:50.726451 containerd[1694]: time="2024-07-02T08:19:50.725214836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 08:19:51.620368 kubelet[3189]: E0702 08:19:51.619118 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsr9" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" Jul 2 08:19:53.368615 containerd[1694]: time="2024-07-02T08:19:53.368513908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:53.371181 containerd[1694]: time="2024-07-02T08:19:53.370981314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 08:19:53.375386 containerd[1694]: time="2024-07-02T08:19:53.375304484Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:53.380246 containerd[1694]: time="2024-07-02T08:19:53.380142215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 2 08:19:53.381088 containerd[1694]: time="2024-07-02T08:19:53.380968257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.654849379s" Jul 2 08:19:53.381088 containerd[1694]: time="2024-07-02T08:19:53.381002657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 08:19:53.383496 containerd[1694]: time="2024-07-02T08:19:53.383451183Z" level=info msg="CreateContainer within sandbox \"4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 08:19:53.427642 containerd[1694]: time="2024-07-02T08:19:53.427579644Z" level=info msg="CreateContainer within sandbox \"4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c\"" Jul 2 08:19:53.429661 containerd[1694]: time="2024-07-02T08:19:53.428402166Z" level=info msg="StartContainer for \"2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c\"" Jul 2 08:19:53.463528 systemd[1]: Started cri-containerd-2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c.scope - libcontainer container 2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c. 
Jul 2 08:19:53.492716 containerd[1694]: time="2024-07-02T08:19:53.492537633Z" level=info msg="StartContainer for \"2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c\" returns successfully" Jul 2 08:19:53.621344 kubelet[3189]: E0702 08:19:53.620685 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvsr9" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" Jul 2 08:19:54.474940 containerd[1694]: time="2024-07-02T08:19:54.474865293Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:19:54.477931 systemd[1]: cri-containerd-2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c.scope: Deactivated successfully. Jul 2 08:19:54.500888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c-rootfs.mount: Deactivated successfully. 
Jul 2 08:19:54.541621 kubelet[3189]: I0702 08:19:54.541588 3189 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:19:54.824567 kubelet[3189]: I0702 08:19:54.564003 3189 topology_manager.go:215] "Topology Admit Handler" podUID="564b4aa8-7677-4a18-9b2e-7b70c8540c90" podNamespace="calico-system" podName="calico-kube-controllers-6ccd77b596-zjlqp" Jul 2 08:19:54.824567 kubelet[3189]: I0702 08:19:54.570246 3189 topology_manager.go:215] "Topology Admit Handler" podUID="0622a859-f890-42d7-963e-91f435085671" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sbktf" Jul 2 08:19:54.824567 kubelet[3189]: I0702 08:19:54.570434 3189 topology_manager.go:215] "Topology Admit Handler" podUID="fc8c61fa-97a4-4862-b730-0646754d9bdf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-87z6p" Jul 2 08:19:54.824567 kubelet[3189]: I0702 08:19:54.634013 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0622a859-f890-42d7-963e-91f435085671-config-volume\") pod \"coredns-7db6d8ff4d-sbktf\" (UID: \"0622a859-f890-42d7-963e-91f435085671\") " pod="kube-system/coredns-7db6d8ff4d-sbktf" Jul 2 08:19:54.824567 kubelet[3189]: I0702 08:19:54.634056 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdn5s\" (UniqueName: \"kubernetes.io/projected/564b4aa8-7677-4a18-9b2e-7b70c8540c90-kube-api-access-xdn5s\") pod \"calico-kube-controllers-6ccd77b596-zjlqp\" (UID: \"564b4aa8-7677-4a18-9b2e-7b70c8540c90\") " pod="calico-system/calico-kube-controllers-6ccd77b596-zjlqp" Jul 2 08:19:54.824567 kubelet[3189]: I0702 08:19:54.634079 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc8c61fa-97a4-4862-b730-0646754d9bdf-config-volume\") pod \"coredns-7db6d8ff4d-87z6p\" (UID: 
\"fc8c61fa-97a4-4862-b730-0646754d9bdf\") " pod="kube-system/coredns-7db6d8ff4d-87z6p" Jul 2 08:19:54.576141 systemd[1]: Created slice kubepods-besteffort-pod564b4aa8_7677_4a18_9b2e_7b70c8540c90.slice - libcontainer container kubepods-besteffort-pod564b4aa8_7677_4a18_9b2e_7b70c8540c90.slice. Jul 2 08:19:54.825194 kubelet[3189]: I0702 08:19:54.634098 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtlqk\" (UniqueName: \"kubernetes.io/projected/0622a859-f890-42d7-963e-91f435085671-kube-api-access-xtlqk\") pod \"coredns-7db6d8ff4d-sbktf\" (UID: \"0622a859-f890-42d7-963e-91f435085671\") " pod="kube-system/coredns-7db6d8ff4d-sbktf" Jul 2 08:19:54.825194 kubelet[3189]: I0702 08:19:54.634117 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/564b4aa8-7677-4a18-9b2e-7b70c8540c90-tigera-ca-bundle\") pod \"calico-kube-controllers-6ccd77b596-zjlqp\" (UID: \"564b4aa8-7677-4a18-9b2e-7b70c8540c90\") " pod="calico-system/calico-kube-controllers-6ccd77b596-zjlqp" Jul 2 08:19:54.825194 kubelet[3189]: I0702 08:19:54.634134 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jlz\" (UniqueName: \"kubernetes.io/projected/fc8c61fa-97a4-4862-b730-0646754d9bdf-kube-api-access-c7jlz\") pod \"coredns-7db6d8ff4d-87z6p\" (UID: \"fc8c61fa-97a4-4862-b730-0646754d9bdf\") " pod="kube-system/coredns-7db6d8ff4d-87z6p" Jul 2 08:19:54.583687 systemd[1]: Created slice kubepods-burstable-podfc8c61fa_97a4_4862_b730_0646754d9bdf.slice - libcontainer container kubepods-burstable-podfc8c61fa_97a4_4862_b730_0646754d9bdf.slice. Jul 2 08:19:54.592242 systemd[1]: Created slice kubepods-burstable-pod0622a859_f890_42d7_963e_91f435085671.slice - libcontainer container kubepods-burstable-pod0622a859_f890_42d7_963e_91f435085671.slice. 
Jul 2 08:19:55.127675 containerd[1694]: time="2024-07-02T08:19:55.127572240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd77b596-zjlqp,Uid:564b4aa8-7677-4a18-9b2e-7b70c8540c90,Namespace:calico-system,Attempt:0,}" Jul 2 08:19:55.132966 containerd[1694]: time="2024-07-02T08:19:55.132581446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87z6p,Uid:fc8c61fa-97a4-4862-b730-0646754d9bdf,Namespace:kube-system,Attempt:0,}" Jul 2 08:19:55.132966 containerd[1694]: time="2024-07-02T08:19:55.132818846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sbktf,Uid:0622a859-f890-42d7-963e-91f435085671,Namespace:kube-system,Attempt:0,}" Jul 2 08:19:55.626680 systemd[1]: Created slice kubepods-besteffort-podec1ab198_8e93_4749_942e_804a7ceb88e7.slice - libcontainer container kubepods-besteffort-podec1ab198_8e93_4749_942e_804a7ceb88e7.slice. Jul 2 08:19:55.629342 containerd[1694]: time="2024-07-02T08:19:55.629259756Z" level=info msg="shim disconnected" id=2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c namespace=k8s.io Jul 2 08:19:55.629342 containerd[1694]: time="2024-07-02T08:19:55.629332596Z" level=warning msg="cleaning up after shim disconnected" id=2af958d567c75a4b20f1204e37b8e949f810a6c9fff221df6e4347fb792d9e3c namespace=k8s.io Jul 2 08:19:55.629342 containerd[1694]: time="2024-07-02T08:19:55.629341996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:19:55.630894 containerd[1694]: time="2024-07-02T08:19:55.630860438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsr9,Uid:ec1ab198-8e93-4749-942e-804a7ceb88e7,Namespace:calico-system,Attempt:0,}" Jul 2 08:19:55.748976 containerd[1694]: time="2024-07-02T08:19:55.748836587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 08:19:55.771601 containerd[1694]: time="2024-07-02T08:19:55.771556136Z" level=error msg="Failed to destroy network for 
sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.772786 containerd[1694]: time="2024-07-02T08:19:55.772641777Z" level=error msg="encountered an error cleaning up failed sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.772786 containerd[1694]: time="2024-07-02T08:19:55.772706537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd77b596-zjlqp,Uid:564b4aa8-7677-4a18-9b2e-7b70c8540c90,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.773567 kubelet[3189]: E0702 08:19:55.773068 3189 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.773567 kubelet[3189]: E0702 08:19:55.773142 3189 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ccd77b596-zjlqp" Jul 2 08:19:55.773567 kubelet[3189]: E0702 08:19:55.773438 3189 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ccd77b596-zjlqp" Jul 2 08:19:55.773709 kubelet[3189]: E0702 08:19:55.773514 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ccd77b596-zjlqp_calico-system(564b4aa8-7677-4a18-9b2e-7b70c8540c90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6ccd77b596-zjlqp_calico-system(564b4aa8-7677-4a18-9b2e-7b70c8540c90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ccd77b596-zjlqp" podUID="564b4aa8-7677-4a18-9b2e-7b70c8540c90" Jul 2 08:19:55.841326 containerd[1694]: time="2024-07-02T08:19:55.841189424Z" level=error msg="Failed to destroy network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 
08:19:55.842047 containerd[1694]: time="2024-07-02T08:19:55.842011385Z" level=error msg="encountered an error cleaning up failed sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.842593 containerd[1694]: time="2024-07-02T08:19:55.842411106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sbktf,Uid:0622a859-f890-42d7-963e-91f435085671,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.844498 kubelet[3189]: E0702 08:19:55.842957 3189 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.844498 kubelet[3189]: E0702 08:19:55.843025 3189 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sbktf" Jul 2 08:19:55.844498 kubelet[3189]: E0702 08:19:55.843044 3189 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sbktf" Jul 2 08:19:55.845541 kubelet[3189]: E0702 08:19:55.843109 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-sbktf_kube-system(0622a859-f890-42d7-963e-91f435085671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-sbktf_kube-system(0622a859-f890-42d7-963e-91f435085671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sbktf" podUID="0622a859-f890-42d7-963e-91f435085671" Jul 2 08:19:55.845657 containerd[1694]: time="2024-07-02T08:19:55.845094669Z" level=error msg="Failed to destroy network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.845657 containerd[1694]: time="2024-07-02T08:19:55.845493110Z" level=error msg="encountered an error cleaning up failed sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.845657 containerd[1694]: time="2024-07-02T08:19:55.845546310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsr9,Uid:ec1ab198-8e93-4749-942e-804a7ceb88e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.845816 kubelet[3189]: E0702 08:19:55.845774 3189 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.845855 kubelet[3189]: E0702 08:19:55.845830 3189 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvsr9" Jul 2 08:19:55.845882 kubelet[3189]: E0702 08:19:55.845850 3189 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-cvsr9" Jul 2 08:19:55.845912 kubelet[3189]: E0702 08:19:55.845890 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cvsr9_calico-system(ec1ab198-8e93-4749-942e-804a7ceb88e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cvsr9_calico-system(ec1ab198-8e93-4749-942e-804a7ceb88e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cvsr9" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" Jul 2 08:19:55.849057 containerd[1694]: time="2024-07-02T08:19:55.848984514Z" level=error msg="Failed to destroy network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.849500 containerd[1694]: time="2024-07-02T08:19:55.849381275Z" level=error msg="encountered an error cleaning up failed sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.849731 containerd[1694]: time="2024-07-02T08:19:55.849690115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87z6p,Uid:fc8c61fa-97a4-4862-b730-0646754d9bdf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.849943 kubelet[3189]: E0702 08:19:55.849900 3189 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:55.850028 kubelet[3189]: E0702 08:19:55.849958 3189 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-87z6p" Jul 2 08:19:55.850028 kubelet[3189]: E0702 08:19:55.849977 3189 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-87z6p" Jul 2 08:19:55.850115 kubelet[3189]: E0702 08:19:55.850020 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-87z6p_kube-system(fc8c61fa-97a4-4862-b730-0646754d9bdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-87z6p_kube-system(fc8c61fa-97a4-4862-b730-0646754d9bdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-87z6p" podUID="fc8c61fa-97a4-4862-b730-0646754d9bdf" Jul 2 08:19:56.655107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda-shm.mount: Deactivated successfully. Jul 2 08:19:56.655202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82-shm.mount: Deactivated successfully. Jul 2 08:19:56.655258 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88-shm.mount: Deactivated successfully. 
Jul 2 08:19:56.744901 kubelet[3189]: I0702 08:19:56.744849 3189 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:19:56.747421 kubelet[3189]: I0702 08:19:56.746741 3189 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:19:56.747579 containerd[1694]: time="2024-07-02T08:19:56.747477893Z" level=info msg="StopPodSandbox for \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\"" Jul 2 08:19:56.747832 containerd[1694]: time="2024-07-02T08:19:56.747756773Z" level=info msg="Ensure that sandbox 08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d in task-service has been cleanup successfully" Jul 2 08:19:56.748199 containerd[1694]: time="2024-07-02T08:19:56.747897453Z" level=info msg="StopPodSandbox for \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\"" Jul 2 08:19:56.748199 containerd[1694]: time="2024-07-02T08:19:56.748103534Z" level=info msg="Ensure that sandbox 3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82 in task-service has been cleanup successfully" Jul 2 08:19:56.751809 kubelet[3189]: I0702 08:19:56.751405 3189 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:19:56.753071 containerd[1694]: time="2024-07-02T08:19:56.752990220Z" level=info msg="StopPodSandbox for \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\"" Jul 2 08:19:56.754353 containerd[1694]: time="2024-07-02T08:19:56.754149381Z" level=info msg="Ensure that sandbox 79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda in task-service has been cleanup successfully" Jul 2 08:19:56.758823 kubelet[3189]: I0702 08:19:56.758088 3189 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:19:56.762345 containerd[1694]: time="2024-07-02T08:19:56.762196071Z" level=info msg="StopPodSandbox for \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\"" Jul 2 08:19:56.762856 containerd[1694]: time="2024-07-02T08:19:56.762495432Z" level=info msg="Ensure that sandbox e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88 in task-service has been cleanup successfully" Jul 2 08:19:56.806923 containerd[1694]: time="2024-07-02T08:19:56.806875608Z" level=error msg="StopPodSandbox for \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\" failed" error="failed to destroy network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:56.807288 kubelet[3189]: E0702 08:19:56.807255 3189 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:19:56.807613 kubelet[3189]: E0702 08:19:56.807547 3189 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda"} Jul 2 08:19:56.807683 containerd[1694]: time="2024-07-02T08:19:56.807584769Z" level=error msg="StopPodSandbox for \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\" failed" error="failed to destroy network for sandbox 
\"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:56.807786 kubelet[3189]: E0702 08:19:56.807770 3189 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc8c61fa-97a4-4862-b730-0646754d9bdf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:19:56.807964 kubelet[3189]: E0702 08:19:56.807867 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc8c61fa-97a4-4862-b730-0646754d9bdf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-87z6p" podUID="fc8c61fa-97a4-4862-b730-0646754d9bdf" Jul 2 08:19:56.809575 kubelet[3189]: E0702 08:19:56.807709 3189 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 
08:19:56.809575 kubelet[3189]: E0702 08:19:56.809459 3189 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d"} Jul 2 08:19:56.809575 kubelet[3189]: E0702 08:19:56.809525 3189 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec1ab198-8e93-4749-942e-804a7ceb88e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:19:56.809575 kubelet[3189]: E0702 08:19:56.809548 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec1ab198-8e93-4749-942e-804a7ceb88e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cvsr9" podUID="ec1ab198-8e93-4749-942e-804a7ceb88e7" Jul 2 08:19:56.835877 containerd[1694]: time="2024-07-02T08:19:56.835365164Z" level=error msg="StopPodSandbox for \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\" failed" error="failed to destroy network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:56.836013 kubelet[3189]: E0702 08:19:56.835626 3189 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:19:56.836013 kubelet[3189]: E0702 08:19:56.835706 3189 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82"} Jul 2 08:19:56.836013 kubelet[3189]: E0702 08:19:56.835746 3189 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0622a859-f890-42d7-963e-91f435085671\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:19:56.836013 kubelet[3189]: E0702 08:19:56.835779 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0622a859-f890-42d7-963e-91f435085671\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sbktf" podUID="0622a859-f890-42d7-963e-91f435085671" Jul 2 08:19:56.839028 containerd[1694]: time="2024-07-02T08:19:56.838789049Z" level=error msg="StopPodSandbox for 
\"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\" failed" error="failed to destroy network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:19:56.839263 kubelet[3189]: E0702 08:19:56.839020 3189 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:19:56.839263 kubelet[3189]: E0702 08:19:56.839064 3189 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88"} Jul 2 08:19:56.839263 kubelet[3189]: E0702 08:19:56.839097 3189 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"564b4aa8-7677-4a18-9b2e-7b70c8540c90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:19:56.839263 kubelet[3189]: E0702 08:19:56.839122 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"564b4aa8-7677-4a18-9b2e-7b70c8540c90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ccd77b596-zjlqp" podUID="564b4aa8-7677-4a18-9b2e-7b70c8540c90" Jul 2 08:19:59.369916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463803827.mount: Deactivated successfully. Jul 2 08:19:59.787595 containerd[1694]: time="2024-07-02T08:19:59.787535214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:59.789935 containerd[1694]: time="2024-07-02T08:19:59.789786539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 08:19:59.794445 containerd[1694]: time="2024-07-02T08:19:59.794381268Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:59.799232 containerd[1694]: time="2024-07-02T08:19:59.799170518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:19:59.799910 containerd[1694]: time="2024-07-02T08:19:59.799690799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 4.050781212s" Jul 2 08:19:59.799910 containerd[1694]: time="2024-07-02T08:19:59.799728559Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 08:19:59.809976 containerd[1694]: time="2024-07-02T08:19:59.809915820Z" level=info msg="CreateContainer within sandbox \"4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 08:19:59.863541 containerd[1694]: time="2024-07-02T08:19:59.863448849Z" level=info msg="CreateContainer within sandbox \"4e59d1d29bfcc9382f0aeb46a933f9bdaeb862973d993d8eac9596c13d490c11\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3666010b35603f86945ea5cc3890f84735df6afc1764d829effd4d03fcc85f1a\"" Jul 2 08:19:59.864676 containerd[1694]: time="2024-07-02T08:19:59.864647332Z" level=info msg="StartContainer for \"3666010b35603f86945ea5cc3890f84735df6afc1764d829effd4d03fcc85f1a\"" Jul 2 08:19:59.892522 systemd[1]: Started cri-containerd-3666010b35603f86945ea5cc3890f84735df6afc1764d829effd4d03fcc85f1a.scope - libcontainer container 3666010b35603f86945ea5cc3890f84735df6afc1764d829effd4d03fcc85f1a. Jul 2 08:19:59.926806 containerd[1694]: time="2024-07-02T08:19:59.926699498Z" level=info msg="StartContainer for \"3666010b35603f86945ea5cc3890f84735df6afc1764d829effd4d03fcc85f1a\" returns successfully" Jul 2 08:20:00.040382 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 08:20:00.041092 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 08:20:00.310335 kubelet[3189]: I0702 08:20:00.309146 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:20:01.845709 systemd-networkd[1332]: vxlan.calico: Link UP Jul 2 08:20:01.845716 systemd-networkd[1332]: vxlan.calico: Gained carrier Jul 2 08:20:03.178478 systemd-networkd[1332]: vxlan.calico: Gained IPv6LL Jul 2 08:20:08.622921 containerd[1694]: time="2024-07-02T08:20:08.621593225Z" level=info msg="StopPodSandbox for \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\"" Jul 2 08:20:08.627469 containerd[1694]: time="2024-07-02T08:20:08.627096356Z" level=info msg="StopPodSandbox for \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\"" Jul 2 08:20:08.696923 kubelet[3189]: I0702 08:20:08.696680 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d9bzs" podStartSLOduration=10.035955305 podStartE2EDuration="23.696661852s" podCreationTimestamp="2024-07-02 08:19:45 +0000 UTC" firstStartedPulling="2024-07-02 08:19:46.140077974 +0000 UTC m=+22.629749785" lastFinishedPulling="2024-07-02 08:19:59.800784561 +0000 UTC m=+36.290456332" observedRunningTime="2024-07-02 08:20:00.788015136 +0000 UTC m=+37.277686987" watchObservedRunningTime="2024-07-02 08:20:08.696661852 +0000 UTC m=+45.186333663" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.697 [INFO][4400] k8s.go 608: Cleaning up netns ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.698 [INFO][4400] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" iface="eth0" netns="/var/run/netns/cni-f63567f3-e824-1dfa-590b-36b78525847e" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.698 [INFO][4400] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" iface="eth0" netns="/var/run/netns/cni-f63567f3-e824-1dfa-590b-36b78525847e" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.698 [INFO][4400] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" iface="eth0" netns="/var/run/netns/cni-f63567f3-e824-1dfa-590b-36b78525847e" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.698 [INFO][4400] k8s.go 615: Releasing IP address(es) ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.699 [INFO][4400] utils.go 188: Calico CNI releasing IP address ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.725 [INFO][4411] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.726 [INFO][4411] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.726 [INFO][4411] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.739 [WARNING][4411] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.739 [INFO][4411] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.741 [INFO][4411] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:08.745935 containerd[1694]: 2024-07-02 08:20:08.744 [INFO][4400] k8s.go 621: Teardown processing complete. ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:08.748274 systemd[1]: run-netns-cni\x2df63567f3\x2de824\x2d1dfa\x2d590b\x2d36b78525847e.mount: Deactivated successfully. 
Jul 2 08:20:08.748835 containerd[1694]: time="2024-07-02T08:20:08.748272114Z" level=info msg="TearDown network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\" successfully" Jul 2 08:20:08.748835 containerd[1694]: time="2024-07-02T08:20:08.748330954Z" level=info msg="StopPodSandbox for \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\" returns successfully" Jul 2 08:20:08.749895 containerd[1694]: time="2024-07-02T08:20:08.749677876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd77b596-zjlqp,Uid:564b4aa8-7677-4a18-9b2e-7b70c8540c90,Namespace:calico-system,Attempt:1,}" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.704 [INFO][4396] k8s.go 608: Cleaning up netns ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.705 [INFO][4396] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" iface="eth0" netns="/var/run/netns/cni-0a2d758e-0e1d-d19c-128d-fa7e987590b6" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.705 [INFO][4396] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" iface="eth0" netns="/var/run/netns/cni-0a2d758e-0e1d-d19c-128d-fa7e987590b6" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.705 [INFO][4396] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" iface="eth0" netns="/var/run/netns/cni-0a2d758e-0e1d-d19c-128d-fa7e987590b6" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.705 [INFO][4396] k8s.go 615: Releasing IP address(es) ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.705 [INFO][4396] utils.go 188: Calico CNI releasing IP address ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.739 [INFO][4416] ipam_plugin.go 411: Releasing address using handleID ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.740 [INFO][4416] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.741 [INFO][4416] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.755 [WARNING][4416] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.755 [INFO][4416] ipam_plugin.go 439: Releasing address using workloadID ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.757 [INFO][4416] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:08.762971 containerd[1694]: 2024-07-02 08:20:08.760 [INFO][4396] k8s.go 621: Teardown processing complete. ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:08.763394 containerd[1694]: time="2024-07-02T08:20:08.763231023Z" level=info msg="TearDown network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\" successfully" Jul 2 08:20:08.763394 containerd[1694]: time="2024-07-02T08:20:08.763259943Z" level=info msg="StopPodSandbox for \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\" returns successfully" Jul 2 08:20:08.765192 systemd[1]: run-netns-cni\x2d0a2d758e\x2d0e1d\x2dd19c\x2d128d\x2dfa7e987590b6.mount: Deactivated successfully. 
Jul 2 08:20:08.766859 containerd[1694]: time="2024-07-02T08:20:08.765935868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsr9,Uid:ec1ab198-8e93-4749-942e-804a7ceb88e7,Namespace:calico-system,Attempt:1,}" Jul 2 08:20:08.954631 systemd-networkd[1332]: cali196a473fc3c: Link UP Jul 2 08:20:08.954982 systemd-networkd[1332]: cali196a473fc3c: Gained carrier Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.859 [INFO][4423] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0 csi-node-driver- calico-system ec1ab198-8e93-4749-942e-804a7ceb88e7 712 0 2024-07-02 08:19:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.1.1-a-7c4c792b73 csi-node-driver-cvsr9 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali196a473fc3c [] []}} ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Namespace="calico-system" Pod="csi-node-driver-cvsr9" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.859 [INFO][4423] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Namespace="calico-system" Pod="csi-node-driver-cvsr9" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.896 [INFO][4446] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" 
HandleID="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.913 [INFO][4446] ipam_plugin.go 264: Auto assigning IP ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" HandleID="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-7c4c792b73", "pod":"csi-node-driver-cvsr9", "timestamp":"2024-07-02 08:20:08.896882566 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7c4c792b73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.913 [INFO][4446] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.914 [INFO][4446] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.914 [INFO][4446] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7c4c792b73' Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.916 [INFO][4446] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.921 [INFO][4446] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.926 [INFO][4446] ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.928 [INFO][4446] ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.930 [INFO][4446] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.930 [INFO][4446] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.932 [INFO][4446] ipam.go 1685: Creating new handle: k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.936 [INFO][4446] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.945 [INFO][4446] ipam.go 1216: Successfully claimed IPs: [192.168.65.65/26] block=192.168.65.64/26 
handle="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.945 [INFO][4446] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.65/26] handle="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.946 [INFO][4446] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:08.983604 containerd[1694]: 2024-07-02 08:20:08.946 [INFO][4446] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.65.65/26] IPv6=[] ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" HandleID="k8s-pod-network.c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.985166 containerd[1694]: 2024-07-02 08:20:08.950 [INFO][4423] k8s.go 386: Populated endpoint ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Namespace="calico-system" Pod="csi-node-driver-cvsr9" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec1ab198-8e93-4749-942e-804a7ceb88e7", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"", Pod:"csi-node-driver-cvsr9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali196a473fc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:08.985166 containerd[1694]: 2024-07-02 08:20:08.951 [INFO][4423] k8s.go 387: Calico CNI using IPs: [192.168.65.65/32] ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Namespace="calico-system" Pod="csi-node-driver-cvsr9" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.985166 containerd[1694]: 2024-07-02 08:20:08.951 [INFO][4423] dataplane_linux.go 68: Setting the host side veth name to cali196a473fc3c ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Namespace="calico-system" Pod="csi-node-driver-cvsr9" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.985166 containerd[1694]: 2024-07-02 08:20:08.953 [INFO][4423] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Namespace="calico-system" Pod="csi-node-driver-cvsr9" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:08.985166 containerd[1694]: 2024-07-02 08:20:08.953 [INFO][4423] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" 
Namespace="calico-system" Pod="csi-node-driver-cvsr9" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec1ab198-8e93-4749-942e-804a7ceb88e7", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e", Pod:"csi-node-driver-cvsr9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali196a473fc3c", MAC:"a2:12:34:4f:ae:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:08.985166 containerd[1694]: 2024-07-02 08:20:08.974 [INFO][4423] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e" Namespace="calico-system" Pod="csi-node-driver-cvsr9" 
WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:09.016096 systemd-networkd[1332]: cali47145dfdfaf: Link UP Jul 2 08:20:09.017684 systemd-networkd[1332]: cali47145dfdfaf: Gained carrier Jul 2 08:20:09.028039 containerd[1694]: time="2024-07-02T08:20:09.026563140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:20:09.028039 containerd[1694]: time="2024-07-02T08:20:09.026633620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:09.028039 containerd[1694]: time="2024-07-02T08:20:09.026662300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:20:09.028039 containerd[1694]: time="2024-07-02T08:20:09.026677941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.875 [INFO][4434] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0 calico-kube-controllers-6ccd77b596- calico-system 564b4aa8-7677-4a18-9b2e-7b70c8540c90 711 0 2024-07-02 08:19:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ccd77b596 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.1.1-a-7c4c792b73 calico-kube-controllers-6ccd77b596-zjlqp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali47145dfdfaf [] []}} ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" 
Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.875 [INFO][4434] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.905 [INFO][4450] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" HandleID="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.920 [INFO][4450] ipam_plugin.go 264: Auto assigning IP ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" HandleID="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005997f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-7c4c792b73", "pod":"calico-kube-controllers-6ccd77b596-zjlqp", "timestamp":"2024-07-02 08:20:08.905025942 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7c4c792b73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.920 [INFO][4450] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.946 [INFO][4450] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.946 [INFO][4450] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7c4c792b73' Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.950 [INFO][4450] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.962 [INFO][4450] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.974 [INFO][4450] ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.982 [INFO][4450] ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.987 [INFO][4450] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.987 [INFO][4450] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.989 [INFO][4450] ipam.go 1685: Creating new handle: k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418 Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:08.995 [INFO][4450] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 
containerd[1694]: 2024-07-02 08:20:09.009 [INFO][4450] ipam.go 1216: Successfully claimed IPs: [192.168.65.66/26] block=192.168.65.64/26 handle="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:09.009 [INFO][4450] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.66/26] handle="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:09.009 [INFO][4450] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:09.040567 containerd[1694]: 2024-07-02 08:20:09.009 [INFO][4450] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.65.66/26] IPv6=[] ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" HandleID="k8s-pod-network.3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:09.041191 containerd[1694]: 2024-07-02 08:20:09.012 [INFO][4434] k8s.go 386: Populated endpoint ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0", GenerateName:"calico-kube-controllers-6ccd77b596-", Namespace:"calico-system", SelfLink:"", UID:"564b4aa8-7677-4a18-9b2e-7b70c8540c90", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd77b596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"", Pod:"calico-kube-controllers-6ccd77b596-zjlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47145dfdfaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:09.041191 containerd[1694]: 2024-07-02 08:20:09.012 [INFO][4434] k8s.go 387: Calico CNI using IPs: [192.168.65.66/32] ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:09.041191 containerd[1694]: 2024-07-02 08:20:09.012 [INFO][4434] dataplane_linux.go 68: Setting the host side veth name to cali47145dfdfaf ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:09.041191 containerd[1694]: 2024-07-02 08:20:09.017 [INFO][4434] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" 
Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:09.041191 containerd[1694]: 2024-07-02 08:20:09.018 [INFO][4434] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0", GenerateName:"calico-kube-controllers-6ccd77b596-", Namespace:"calico-system", SelfLink:"", UID:"564b4aa8-7677-4a18-9b2e-7b70c8540c90", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd77b596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418", Pod:"calico-kube-controllers-6ccd77b596-zjlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47145dfdfaf", MAC:"5a:27:a8:cb:59:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:09.041191 containerd[1694]: 2024-07-02 08:20:09.036 [INFO][4434] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418" Namespace="calico-system" Pod="calico-kube-controllers-6ccd77b596-zjlqp" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:09.062553 systemd[1]: Started cri-containerd-c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e.scope - libcontainer container c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e. Jul 2 08:20:09.085824 containerd[1694]: time="2024-07-02T08:20:09.083832373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:20:09.085824 containerd[1694]: time="2024-07-02T08:20:09.084052693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:09.085824 containerd[1694]: time="2024-07-02T08:20:09.084075853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:20:09.085824 containerd[1694]: time="2024-07-02T08:20:09.084198574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:09.095300 containerd[1694]: time="2024-07-02T08:20:09.095261075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvsr9,Uid:ec1ab198-8e93-4749-942e-804a7ceb88e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e\"" Jul 2 08:20:09.098067 containerd[1694]: time="2024-07-02T08:20:09.097942361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 08:20:09.108561 systemd[1]: Started cri-containerd-3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418.scope - libcontainer container 3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418. Jul 2 08:20:09.145944 containerd[1694]: time="2024-07-02T08:20:09.145805375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccd77b596-zjlqp,Uid:564b4aa8-7677-4a18-9b2e-7b70c8540c90,Namespace:calico-system,Attempt:1,} returns sandbox id \"3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418\"" Jul 2 08:20:09.300737 kubelet[3189]: I0702 08:20:09.300425 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:20:10.090471 systemd-networkd[1332]: cali196a473fc3c: Gained IPv6LL Jul 2 08:20:10.474434 systemd-networkd[1332]: cali47145dfdfaf: Gained IPv6LL Jul 2 08:20:10.619161 containerd[1694]: time="2024-07-02T08:20:10.618877033Z" level=info msg="StopPodSandbox for \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\"" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.675 [INFO][4629] k8s.go 608: Cleaning up netns ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.676 [INFO][4629] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" iface="eth0" netns="/var/run/netns/cni-ff9dc6f7-d7f0-973c-16a8-5ba615789d19" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.677 [INFO][4629] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" iface="eth0" netns="/var/run/netns/cni-ff9dc6f7-d7f0-973c-16a8-5ba615789d19" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.677 [INFO][4629] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" iface="eth0" netns="/var/run/netns/cni-ff9dc6f7-d7f0-973c-16a8-5ba615789d19" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.677 [INFO][4629] k8s.go 615: Releasing IP address(es) ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.677 [INFO][4629] utils.go 188: Calico CNI releasing IP address ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.701 [INFO][4639] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.701 [INFO][4639] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.701 [INFO][4639] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.710 [WARNING][4639] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.711 [INFO][4639] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.714 [INFO][4639] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:10.720380 containerd[1694]: 2024-07-02 08:20:10.716 [INFO][4629] k8s.go 621: Teardown processing complete. ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:10.720984 containerd[1694]: time="2024-07-02T08:20:10.720592531Z" level=info msg="TearDown network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\" successfully" Jul 2 08:20:10.720984 containerd[1694]: time="2024-07-02T08:20:10.720628851Z" level=info msg="StopPodSandbox for \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\" returns successfully" Jul 2 08:20:10.722133 systemd[1]: run-netns-cni\x2dff9dc6f7\x2dd7f0\x2d973c\x2d16a8\x2d5ba615789d19.mount: Deactivated successfully. 
Jul 2 08:20:10.723753 containerd[1694]: time="2024-07-02T08:20:10.723181935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sbktf,Uid:0622a859-f890-42d7-963e-91f435085671,Namespace:kube-system,Attempt:1,}" Jul 2 08:20:10.789066 containerd[1694]: time="2024-07-02T08:20:10.788188249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:10.792709 containerd[1694]: time="2024-07-02T08:20:10.792666217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 08:20:10.798074 containerd[1694]: time="2024-07-02T08:20:10.798029746Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:10.803938 containerd[1694]: time="2024-07-02T08:20:10.803875996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:10.804388 containerd[1694]: time="2024-07-02T08:20:10.804352717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.706129076s" Jul 2 08:20:10.804388 containerd[1694]: time="2024-07-02T08:20:10.804386077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 08:20:10.806724 containerd[1694]: time="2024-07-02T08:20:10.806596241Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 08:20:10.808201 containerd[1694]: time="2024-07-02T08:20:10.807986603Z" level=info msg="CreateContainer within sandbox \"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 08:20:10.861711 containerd[1694]: time="2024-07-02T08:20:10.861569857Z" level=info msg="CreateContainer within sandbox \"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4173416d4941acd45fda21bea11fe6eb2116789982c18b6fd73d51fcd036c7c7\"" Jul 2 08:20:10.864352 containerd[1694]: time="2024-07-02T08:20:10.862047738Z" level=info msg="StartContainer for \"4173416d4941acd45fda21bea11fe6eb2116789982c18b6fd73d51fcd036c7c7\"" Jul 2 08:20:10.906570 systemd[1]: Started cri-containerd-4173416d4941acd45fda21bea11fe6eb2116789982c18b6fd73d51fcd036c7c7.scope - libcontainer container 4173416d4941acd45fda21bea11fe6eb2116789982c18b6fd73d51fcd036c7c7. 
Jul 2 08:20:10.936659 systemd-networkd[1332]: cali02f319f368e: Link UP Jul 2 08:20:10.939475 systemd-networkd[1332]: cali02f319f368e: Gained carrier Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.831 [INFO][4645] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0 coredns-7db6d8ff4d- kube-system 0622a859-f890-42d7-963e-91f435085671 727 0 2024-07-02 08:19:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-7c4c792b73 coredns-7db6d8ff4d-sbktf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02f319f368e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.831 [INFO][4645] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.869 [INFO][4657] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" HandleID="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.886 [INFO][4657] ipam_plugin.go 264: Auto assigning IP 
ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" HandleID="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003182a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-7c4c792b73", "pod":"coredns-7db6d8ff4d-sbktf", "timestamp":"2024-07-02 08:20:10.869770671 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7c4c792b73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.886 [INFO][4657] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.886 [INFO][4657] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.886 [INFO][4657] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7c4c792b73' Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.891 [INFO][4657] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.898 [INFO][4657] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.906 [INFO][4657] ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.909 [INFO][4657] ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.912 [INFO][4657] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.912 [INFO][4657] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.915 [INFO][4657] ipam.go 1685: Creating new handle: k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.922 [INFO][4657] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.927 [INFO][4657] ipam.go 1216: Successfully claimed IPs: [192.168.65.67/26] block=192.168.65.64/26 
handle="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.927 [INFO][4657] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.67/26] handle="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.927 [INFO][4657] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:10.960218 containerd[1694]: 2024-07-02 08:20:10.927 [INFO][4657] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.65.67/26] IPv6=[] ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" HandleID="k8s-pod-network.231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.961866 containerd[1694]: 2024-07-02 08:20:10.931 [INFO][4645] k8s.go 386: Populated endpoint ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0622a859-f890-42d7-963e-91f435085671", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"", Pod:"coredns-7db6d8ff4d-sbktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f319f368e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:10.961866 containerd[1694]: 2024-07-02 08:20:10.931 [INFO][4645] k8s.go 387: Calico CNI using IPs: [192.168.65.67/32] ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.961866 containerd[1694]: 2024-07-02 08:20:10.931 [INFO][4645] dataplane_linux.go 68: Setting the host side veth name to cali02f319f368e ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.961866 containerd[1694]: 2024-07-02 08:20:10.938 [INFO][4645] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" 
WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.961866 containerd[1694]: 2024-07-02 08:20:10.939 [INFO][4645] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0622a859-f890-42d7-963e-91f435085671", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e", Pod:"coredns-7db6d8ff4d-sbktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f319f368e", MAC:"fa:76:8a:ae:cd:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:10.961866 containerd[1694]: 2024-07-02 08:20:10.955 [INFO][4645] k8s.go 500: Wrote updated endpoint to datastore ContainerID="231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sbktf" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:10.969716 containerd[1694]: time="2024-07-02T08:20:10.969651566Z" level=info msg="StartContainer for \"4173416d4941acd45fda21bea11fe6eb2116789982c18b6fd73d51fcd036c7c7\" returns successfully" Jul 2 08:20:10.993918 containerd[1694]: time="2024-07-02T08:20:10.993766808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:20:10.993918 containerd[1694]: time="2024-07-02T08:20:10.993833048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:10.993918 containerd[1694]: time="2024-07-02T08:20:10.993849688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:20:10.993918 containerd[1694]: time="2024-07-02T08:20:10.993868248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:11.009519 systemd[1]: Started cri-containerd-231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e.scope - libcontainer container 231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e. 
Jul 2 08:20:11.052730 containerd[1694]: time="2024-07-02T08:20:11.051738629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sbktf,Uid:0622a859-f890-42d7-963e-91f435085671,Namespace:kube-system,Attempt:1,} returns sandbox id \"231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e\"" Jul 2 08:20:11.057379 containerd[1694]: time="2024-07-02T08:20:11.057303279Z" level=info msg="CreateContainer within sandbox \"231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:20:11.098234 containerd[1694]: time="2024-07-02T08:20:11.098182110Z" level=info msg="CreateContainer within sandbox \"231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"057986f2c0c36cd706567789e54f8ca3835f31fb62de2ab7985eb453c06f57fe\"" Jul 2 08:20:11.098893 containerd[1694]: time="2024-07-02T08:20:11.098827351Z" level=info msg="StartContainer for \"057986f2c0c36cd706567789e54f8ca3835f31fb62de2ab7985eb453c06f57fe\"" Jul 2 08:20:11.125529 systemd[1]: Started cri-containerd-057986f2c0c36cd706567789e54f8ca3835f31fb62de2ab7985eb453c06f57fe.scope - libcontainer container 057986f2c0c36cd706567789e54f8ca3835f31fb62de2ab7985eb453c06f57fe. 
Jul 2 08:20:11.156204 containerd[1694]: time="2024-07-02T08:20:11.155960891Z" level=info msg="StartContainer for \"057986f2c0c36cd706567789e54f8ca3835f31fb62de2ab7985eb453c06f57fe\" returns successfully" Jul 2 08:20:11.619888 containerd[1694]: time="2024-07-02T08:20:11.619834101Z" level=info msg="StopPodSandbox for \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\"" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.663 [INFO][4799] k8s.go 608: Cleaning up netns ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.664 [INFO][4799] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" iface="eth0" netns="/var/run/netns/cni-fad22bbd-25b1-6cb2-0d35-36a9ed433f1e" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.664 [INFO][4799] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" iface="eth0" netns="/var/run/netns/cni-fad22bbd-25b1-6cb2-0d35-36a9ed433f1e" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.664 [INFO][4799] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" iface="eth0" netns="/var/run/netns/cni-fad22bbd-25b1-6cb2-0d35-36a9ed433f1e" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.664 [INFO][4799] k8s.go 615: Releasing IP address(es) ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.664 [INFO][4799] utils.go 188: Calico CNI releasing IP address ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.689 [INFO][4805] ipam_plugin.go 411: Releasing address using handleID ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.689 [INFO][4805] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.689 [INFO][4805] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.697 [WARNING][4805] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.697 [INFO][4805] ipam_plugin.go 439: Releasing address using workloadID ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.700 [INFO][4805] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:11.703408 containerd[1694]: 2024-07-02 08:20:11.702 [INFO][4799] k8s.go 621: Teardown processing complete. ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:11.703863 containerd[1694]: time="2024-07-02T08:20:11.703649368Z" level=info msg="TearDown network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\" successfully" Jul 2 08:20:11.703863 containerd[1694]: time="2024-07-02T08:20:11.703683248Z" level=info msg="StopPodSandbox for \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\" returns successfully" Jul 2 08:20:11.704681 containerd[1694]: time="2024-07-02T08:20:11.704644289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87z6p,Uid:fc8c61fa-97a4-4862-b730-0646754d9bdf,Namespace:kube-system,Attempt:1,}" Jul 2 08:20:11.777390 systemd[1]: run-netns-cni\x2dfad22bbd\x2d25b1\x2d6cb2\x2d0d35\x2d36a9ed433f1e.mount: Deactivated successfully. 
Jul 2 08:20:11.814342 kubelet[3189]: I0702 08:20:11.812689 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sbktf" podStartSLOduration=32.812668798 podStartE2EDuration="32.812668798s" podCreationTimestamp="2024-07-02 08:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:20:11.811521156 +0000 UTC m=+48.301192927" watchObservedRunningTime="2024-07-02 08:20:11.812668798 +0000 UTC m=+48.302340609" Jul 2 08:20:11.896528 systemd-networkd[1332]: cali78f5facd66d: Link UP Jul 2 08:20:11.898756 systemd-networkd[1332]: cali78f5facd66d: Gained carrier Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.784 [INFO][4812] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0 coredns-7db6d8ff4d- kube-system fc8c61fa-97a4-4862-b730-0646754d9bdf 739 0 2024-07-02 08:19:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-7c4c792b73 coredns-7db6d8ff4d-87z6p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali78f5facd66d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.784 [INFO][4812] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 
08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.825 [INFO][4824] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" HandleID="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.853 [INFO][4824] ipam_plugin.go 264: Auto assigning IP ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" HandleID="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316c60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-7c4c792b73", "pod":"coredns-7db6d8ff4d-87z6p", "timestamp":"2024-07-02 08:20:11.825554581 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7c4c792b73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.853 [INFO][4824] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.853 [INFO][4824] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.853 [INFO][4824] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7c4c792b73' Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.857 [INFO][4824] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.864 [INFO][4824] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.870 [INFO][4824] ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.873 [INFO][4824] ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.876 [INFO][4824] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.876 [INFO][4824] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.878 [INFO][4824] ipam.go 1685: Creating new handle: k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4 Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.883 [INFO][4824] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.890 [INFO][4824] ipam.go 1216: Successfully claimed IPs: [192.168.65.68/26] block=192.168.65.64/26 
handle="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.890 [INFO][4824] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.68/26] handle="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.890 [INFO][4824] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:11.918995 containerd[1694]: 2024-07-02 08:20:11.890 [INFO][4824] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.65.68/26] IPv6=[] ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" HandleID="k8s-pod-network.c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.919783 containerd[1694]: 2024-07-02 08:20:11.893 [INFO][4812] k8s.go 386: Populated endpoint ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fc8c61fa-97a4-4862-b730-0646754d9bdf", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"", Pod:"coredns-7db6d8ff4d-87z6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78f5facd66d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:11.919783 containerd[1694]: 2024-07-02 08:20:11.893 [INFO][4812] k8s.go 387: Calico CNI using IPs: [192.168.65.68/32] ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.919783 containerd[1694]: 2024-07-02 08:20:11.893 [INFO][4812] dataplane_linux.go 68: Setting the host side veth name to cali78f5facd66d ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.919783 containerd[1694]: 2024-07-02 08:20:11.898 [INFO][4812] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" 
WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.919783 containerd[1694]: 2024-07-02 08:20:11.899 [INFO][4812] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fc8c61fa-97a4-4862-b730-0646754d9bdf", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4", Pod:"coredns-7db6d8ff4d-87z6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78f5facd66d", MAC:"7a:88:bb:a9:70:57", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:11.919783 containerd[1694]: 2024-07-02 08:20:11.915 [INFO][4812] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87z6p" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:11.950109 containerd[1694]: time="2024-07-02T08:20:11.949802598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:20:11.950109 containerd[1694]: time="2024-07-02T08:20:11.949873518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:11.950109 containerd[1694]: time="2024-07-02T08:20:11.949893558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:20:11.950109 containerd[1694]: time="2024-07-02T08:20:11.949906558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:11.974942 systemd[1]: Started cri-containerd-c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4.scope - libcontainer container c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4. 
Jul 2 08:20:12.009545 containerd[1694]: time="2024-07-02T08:20:12.009489702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87z6p,Uid:fc8c61fa-97a4-4862-b730-0646754d9bdf,Namespace:kube-system,Attempt:1,} returns sandbox id \"c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4\"" Jul 2 08:20:12.013256 containerd[1694]: time="2024-07-02T08:20:12.013200268Z" level=info msg="CreateContainer within sandbox \"c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:20:12.055472 containerd[1694]: time="2024-07-02T08:20:12.055416942Z" level=info msg="CreateContainer within sandbox \"c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9e7cc7640d143166312c816a3e5e15a402cd04d5b4ce5e80feb08b2a1e9c266\"" Jul 2 08:20:12.056329 containerd[1694]: time="2024-07-02T08:20:12.056254624Z" level=info msg="StartContainer for \"a9e7cc7640d143166312c816a3e5e15a402cd04d5b4ce5e80feb08b2a1e9c266\"" Jul 2 08:20:12.079512 systemd[1]: Started cri-containerd-a9e7cc7640d143166312c816a3e5e15a402cd04d5b4ce5e80feb08b2a1e9c266.scope - libcontainer container a9e7cc7640d143166312c816a3e5e15a402cd04d5b4ce5e80feb08b2a1e9c266. 
Jul 2 08:20:12.114564 containerd[1694]: time="2024-07-02T08:20:12.114460085Z" level=info msg="StartContainer for \"a9e7cc7640d143166312c816a3e5e15a402cd04d5b4ce5e80feb08b2a1e9c266\" returns successfully" Jul 2 08:20:12.202502 systemd-networkd[1332]: cali02f319f368e: Gained IPv6LL Jul 2 08:20:13.098509 systemd-networkd[1332]: cali78f5facd66d: Gained IPv6LL Jul 2 08:20:13.819548 kubelet[3189]: I0702 08:20:13.819476 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-87z6p" podStartSLOduration=34.819457103 podStartE2EDuration="34.819457103s" podCreationTimestamp="2024-07-02 08:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:20:12.819255716 +0000 UTC m=+49.308927527" watchObservedRunningTime="2024-07-02 08:20:13.819457103 +0000 UTC m=+50.309128914" Jul 2 08:20:15.829320 update_engine[1662]: I0702 08:20:15.829267 1662 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.829349 1662 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.829547 1662 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830032 1662 omaha_request_params.cc:62] Current group set to beta Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830134 1662 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830139 1662 update_attempter.cc:643] Scheduling an action processor start. 
Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830153 1662 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830183 1662 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830235 1662 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830238 1662 omaha_request_action.cc:272] Request: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.830242 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.831187 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 08:20:16.023661 update_engine[1662]: I0702 08:20:15.831533 1662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 08:20:16.023661 update_engine[1662]: E0702 08:20:15.849119 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 08:20:16.024115 locksmithd[1710]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 08:20:16.024377 update_engine[1662]: I0702 08:20:15.849232 1662 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 08:20:16.047551 containerd[1694]: time="2024-07-02T08:20:16.047493994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:16.051824 containerd[1694]: time="2024-07-02T08:20:16.051655202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 08:20:16.058335 containerd[1694]: time="2024-07-02T08:20:16.058139613Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:16.064889 containerd[1694]: time="2024-07-02T08:20:16.064808985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:16.067009 containerd[1694]: time="2024-07-02T08:20:16.066039547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 5.259290706s" Jul 2 08:20:16.067009 containerd[1694]: time="2024-07-02T08:20:16.066080787Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 08:20:16.067629 containerd[1694]: time="2024-07-02T08:20:16.067479989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 08:20:16.108619 containerd[1694]: time="2024-07-02T08:20:16.108398661Z" level=info msg="CreateContainer within sandbox \"3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 08:20:16.159210 containerd[1694]: time="2024-07-02T08:20:16.159110949Z" level=info msg="CreateContainer within sandbox \"3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"65884377455852f0ca402c73facdcc227c839d6db714c42eb48bf0bb81b18329\"" Jul 2 08:20:16.159735 containerd[1694]: time="2024-07-02T08:20:16.159682350Z" level=info msg="StartContainer for \"65884377455852f0ca402c73facdcc227c839d6db714c42eb48bf0bb81b18329\"" Jul 2 08:20:16.195569 systemd[1]: Started cri-containerd-65884377455852f0ca402c73facdcc227c839d6db714c42eb48bf0bb81b18329.scope - libcontainer container 65884377455852f0ca402c73facdcc227c839d6db714c42eb48bf0bb81b18329. 
Jul 2 08:20:16.297054 containerd[1694]: time="2024-07-02T08:20:16.297000910Z" level=info msg="StartContainer for \"65884377455852f0ca402c73facdcc227c839d6db714c42eb48bf0bb81b18329\" returns successfully" Jul 2 08:20:16.837069 kubelet[3189]: I0702 08:20:16.836992 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6ccd77b596-zjlqp" podStartSLOduration=24.917361282 podStartE2EDuration="31.836971693s" podCreationTimestamp="2024-07-02 08:19:45 +0000 UTC" firstStartedPulling="2024-07-02 08:20:09.147247297 +0000 UTC m=+45.636919108" lastFinishedPulling="2024-07-02 08:20:16.066857708 +0000 UTC m=+52.556529519" observedRunningTime="2024-07-02 08:20:16.836488692 +0000 UTC m=+53.326160503" watchObservedRunningTime="2024-07-02 08:20:16.836971693 +0000 UTC m=+53.326643504" Jul 2 08:20:19.500086 containerd[1694]: time="2024-07-02T08:20:19.500029013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:19.504734 containerd[1694]: time="2024-07-02T08:20:19.504688382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 08:20:19.509351 containerd[1694]: time="2024-07-02T08:20:19.509299431Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:19.514697 containerd[1694]: time="2024-07-02T08:20:19.514654321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:19.516066 containerd[1694]: time="2024-07-02T08:20:19.516027044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with 
image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 3.448426814s" Jul 2 08:20:19.516111 containerd[1694]: time="2024-07-02T08:20:19.516072844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 08:20:19.520326 containerd[1694]: time="2024-07-02T08:20:19.520263852Z" level=info msg="CreateContainer within sandbox \"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 08:20:19.564929 containerd[1694]: time="2024-07-02T08:20:19.564875257Z" level=info msg="CreateContainer within sandbox \"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"72326e4a283d36856814c0b4f599c75bb55e3955b6e867a4883b6a836191913f\"" Jul 2 08:20:19.566320 containerd[1694]: time="2024-07-02T08:20:19.566261780Z" level=info msg="StartContainer for \"72326e4a283d36856814c0b4f599c75bb55e3955b6e867a4883b6a836191913f\"" Jul 2 08:20:19.608381 systemd[1]: Started cri-containerd-72326e4a283d36856814c0b4f599c75bb55e3955b6e867a4883b6a836191913f.scope - libcontainer container 72326e4a283d36856814c0b4f599c75bb55e3955b6e867a4883b6a836191913f. 
Jul 2 08:20:19.667285 containerd[1694]: time="2024-07-02T08:20:19.667222694Z" level=info msg="StartContainer for \"72326e4a283d36856814c0b4f599c75bb55e3955b6e867a4883b6a836191913f\" returns successfully" Jul 2 08:20:19.725628 kubelet[3189]: I0702 08:20:19.725550 3189 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 08:20:19.725628 kubelet[3189]: I0702 08:20:19.725596 3189 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 08:20:19.836304 kubelet[3189]: I0702 08:20:19.836099 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cvsr9" podStartSLOduration=24.417068773 podStartE2EDuration="34.836077098s" podCreationTimestamp="2024-07-02 08:19:45 +0000 UTC" firstStartedPulling="2024-07-02 08:20:09.09765224 +0000 UTC m=+45.587324051" lastFinishedPulling="2024-07-02 08:20:19.516660605 +0000 UTC m=+56.006332376" observedRunningTime="2024-07-02 08:20:19.833415693 +0000 UTC m=+56.323087504" watchObservedRunningTime="2024-07-02 08:20:19.836077098 +0000 UTC m=+56.325748909" Jul 2 08:20:20.078801 kubelet[3189]: I0702 08:20:20.078737 3189 topology_manager.go:215] "Topology Admit Handler" podUID="ec68846b-a10b-4244-adf4-a6283f0ddc0e" podNamespace="calico-apiserver" podName="calico-apiserver-f495f97bb-vqcxz" Jul 2 08:20:20.089644 systemd[1]: Created slice kubepods-besteffort-podec68846b_a10b_4244_adf4_a6283f0ddc0e.slice - libcontainer container kubepods-besteffort-podec68846b_a10b_4244_adf4_a6283f0ddc0e.slice. 
Jul 2 08:20:20.099823 kubelet[3189]: I0702 08:20:20.099461 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g4wv\" (UniqueName: \"kubernetes.io/projected/ec68846b-a10b-4244-adf4-a6283f0ddc0e-kube-api-access-7g4wv\") pod \"calico-apiserver-f495f97bb-vqcxz\" (UID: \"ec68846b-a10b-4244-adf4-a6283f0ddc0e\") " pod="calico-apiserver/calico-apiserver-f495f97bb-vqcxz" Jul 2 08:20:20.099823 kubelet[3189]: I0702 08:20:20.099511 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec68846b-a10b-4244-adf4-a6283f0ddc0e-calico-apiserver-certs\") pod \"calico-apiserver-f495f97bb-vqcxz\" (UID: \"ec68846b-a10b-4244-adf4-a6283f0ddc0e\") " pod="calico-apiserver/calico-apiserver-f495f97bb-vqcxz" Jul 2 08:20:20.115540 kubelet[3189]: I0702 08:20:20.114393 3189 topology_manager.go:215] "Topology Admit Handler" podUID="6a59a325-3eac-4e23-a96a-5bf4bb616603" podNamespace="calico-apiserver" podName="calico-apiserver-f495f97bb-nkprr" Jul 2 08:20:20.124608 systemd[1]: Created slice kubepods-besteffort-pod6a59a325_3eac_4e23_a96a_5bf4bb616603.slice - libcontainer container kubepods-besteffort-pod6a59a325_3eac_4e23_a96a_5bf4bb616603.slice. 
Jul 2 08:20:20.200577 kubelet[3189]: I0702 08:20:20.200527 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qkq\" (UniqueName: \"kubernetes.io/projected/6a59a325-3eac-4e23-a96a-5bf4bb616603-kube-api-access-x2qkq\") pod \"calico-apiserver-f495f97bb-nkprr\" (UID: \"6a59a325-3eac-4e23-a96a-5bf4bb616603\") " pod="calico-apiserver/calico-apiserver-f495f97bb-nkprr" Jul 2 08:20:20.200577 kubelet[3189]: I0702 08:20:20.200579 3189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a59a325-3eac-4e23-a96a-5bf4bb616603-calico-apiserver-certs\") pod \"calico-apiserver-f495f97bb-nkprr\" (UID: \"6a59a325-3eac-4e23-a96a-5bf4bb616603\") " pod="calico-apiserver/calico-apiserver-f495f97bb-nkprr" Jul 2 08:20:20.201019 kubelet[3189]: E0702 08:20:20.200952 3189 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 08:20:20.201019 kubelet[3189]: E0702 08:20:20.201015 3189 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec68846b-a10b-4244-adf4-a6283f0ddc0e-calico-apiserver-certs podName:ec68846b-a10b-4244-adf4-a6283f0ddc0e nodeName:}" failed. No retries permitted until 2024-07-02 08:20:20.700997159 +0000 UTC m=+57.190668970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ec68846b-a10b-4244-adf4-a6283f0ddc0e-calico-apiserver-certs") pod "calico-apiserver-f495f97bb-vqcxz" (UID: "ec68846b-a10b-4244-adf4-a6283f0ddc0e") : secret "calico-apiserver-certs" not found Jul 2 08:20:20.301067 kubelet[3189]: E0702 08:20:20.301026 3189 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 08:20:20.301547 kubelet[3189]: E0702 08:20:20.301387 3189 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a59a325-3eac-4e23-a96a-5bf4bb616603-calico-apiserver-certs podName:6a59a325-3eac-4e23-a96a-5bf4bb616603 nodeName:}" failed. No retries permitted until 2024-07-02 08:20:20.801365991 +0000 UTC m=+57.291037802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6a59a325-3eac-4e23-a96a-5bf4bb616603-calico-apiserver-certs") pod "calico-apiserver-f495f97bb-nkprr" (UID: "6a59a325-3eac-4e23-a96a-5bf4bb616603") : secret "calico-apiserver-certs" not found Jul 2 08:20:20.703436 kubelet[3189]: E0702 08:20:20.703386 3189 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 08:20:20.703602 kubelet[3189]: E0702 08:20:20.703479 3189 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec68846b-a10b-4244-adf4-a6283f0ddc0e-calico-apiserver-certs podName:ec68846b-a10b-4244-adf4-a6283f0ddc0e nodeName:}" failed. No retries permitted until 2024-07-02 08:20:21.703453363 +0000 UTC m=+58.193125174 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ec68846b-a10b-4244-adf4-a6283f0ddc0e-calico-apiserver-certs") pod "calico-apiserver-f495f97bb-vqcxz" (UID: "ec68846b-a10b-4244-adf4-a6283f0ddc0e") : secret "calico-apiserver-certs" not found Jul 2 08:20:20.804167 kubelet[3189]: E0702 08:20:20.804115 3189 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 08:20:20.804585 kubelet[3189]: E0702 08:20:20.804194 3189 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a59a325-3eac-4e23-a96a-5bf4bb616603-calico-apiserver-certs podName:6a59a325-3eac-4e23-a96a-5bf4bb616603 nodeName:}" failed. No retries permitted until 2024-07-02 08:20:21.804178317 +0000 UTC m=+58.293850128 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6a59a325-3eac-4e23-a96a-5bf4bb616603-calico-apiserver-certs") pod "calico-apiserver-f495f97bb-nkprr" (UID: "6a59a325-3eac-4e23-a96a-5bf4bb616603") : secret "calico-apiserver-certs" not found Jul 2 08:20:21.898339 containerd[1694]: time="2024-07-02T08:20:21.896598414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f495f97bb-vqcxz,Uid:ec68846b-a10b-4244-adf4-a6283f0ddc0e,Namespace:calico-apiserver,Attempt:0,}" Jul 2 08:20:21.928853 containerd[1694]: time="2024-07-02T08:20:21.928758835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f495f97bb-nkprr,Uid:6a59a325-3eac-4e23-a96a-5bf4bb616603,Namespace:calico-apiserver,Attempt:0,}" Jul 2 08:20:22.123845 systemd-networkd[1332]: calie5171cf12ca: Link UP Jul 2 08:20:22.126672 systemd-networkd[1332]: calie5171cf12ca: Gained carrier Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.011 [INFO][5062] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0 calico-apiserver-f495f97bb- calico-apiserver ec68846b-a10b-4244-adf4-a6283f0ddc0e 837 0 2024-07-02 08:20:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f495f97bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-7c4c792b73 calico-apiserver-f495f97bb-vqcxz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5171cf12ca [] []}} ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.012 [INFO][5062] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.060 [INFO][5087] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" HandleID="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.079 [INFO][5087] ipam_plugin.go 264: Auto assigning IP ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" HandleID="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003162c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-7c4c792b73", "pod":"calico-apiserver-f495f97bb-vqcxz", "timestamp":"2024-07-02 08:20:22.060570688 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7c4c792b73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.079 [INFO][5087] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.079 [INFO][5087] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.080 [INFO][5087] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7c4c792b73' Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.082 [INFO][5087] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.088 [INFO][5087] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.093 [INFO][5087] ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.096 [INFO][5087] ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.098 [INFO][5087] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.098 [INFO][5087] ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.65.64/26 handle="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.100 [INFO][5087] ipam.go 1685: Creating new handle: k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.105 [INFO][5087] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.112 [INFO][5087] ipam.go 1216: Successfully claimed IPs: [192.168.65.69/26] block=192.168.65.64/26 handle="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.112 [INFO][5087] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.69/26] handle="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.112 [INFO][5087] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:20:22.151880 containerd[1694]: 2024-07-02 08:20:22.112 [INFO][5087] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.65.69/26] IPv6=[] ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" HandleID="k8s-pod-network.6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" Jul 2 08:20:22.153884 containerd[1694]: 2024-07-02 08:20:22.115 [INFO][5062] k8s.go 386: Populated endpoint ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0", GenerateName:"calico-apiserver-f495f97bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec68846b-a10b-4244-adf4-a6283f0ddc0e", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f495f97bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"", Pod:"calico-apiserver-f495f97bb-vqcxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5171cf12ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:22.153884 containerd[1694]: 2024-07-02 08:20:22.115 [INFO][5062] k8s.go 387: Calico CNI using IPs: [192.168.65.69/32] ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" Jul 2 08:20:22.153884 containerd[1694]: 2024-07-02 08:20:22.116 [INFO][5062] dataplane_linux.go 68: Setting the host side veth name to calie5171cf12ca ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" Jul 2 08:20:22.153884 containerd[1694]: 2024-07-02 08:20:22.129 [INFO][5062] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" Jul 2 08:20:22.153884 containerd[1694]: 2024-07-02 08:20:22.129 [INFO][5062] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0", 
GenerateName:"calico-apiserver-f495f97bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec68846b-a10b-4244-adf4-a6283f0ddc0e", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f495f97bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea", Pod:"calico-apiserver-f495f97bb-vqcxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5171cf12ca", MAC:"02:54:3e:84:1a:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:22.153884 containerd[1694]: 2024-07-02 08:20:22.145 [INFO][5062] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-vqcxz" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--vqcxz-eth0" Jul 2 08:20:22.201180 systemd-networkd[1332]: cali32ab0dd1f31: Link UP Jul 2 08:20:22.202849 systemd-networkd[1332]: cali32ab0dd1f31: Gained carrier Jul 2 08:20:22.211073 containerd[1694]: time="2024-07-02T08:20:22.209149094Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:20:22.211073 containerd[1694]: time="2024-07-02T08:20:22.209227774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:22.211073 containerd[1694]: time="2024-07-02T08:20:22.209247494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:20:22.211073 containerd[1694]: time="2024-07-02T08:20:22.209261174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.042 [INFO][5076] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0 calico-apiserver-f495f97bb- calico-apiserver 6a59a325-3eac-4e23-a96a-5bf4bb616603 844 0 2024-07-02 08:20:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f495f97bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-7c4c792b73 calico-apiserver-f495f97bb-nkprr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali32ab0dd1f31 [] []}} ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.043 [INFO][5076] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" 
Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.095 [INFO][5093] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" HandleID="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.111 [INFO][5093] ipam_plugin.go 264: Auto assigning IP ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" HandleID="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000261730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-7c4c792b73", "pod":"calico-apiserver-f495f97bb-nkprr", "timestamp":"2024-07-02 08:20:22.095886796 +0000 UTC"}, Hostname:"ci-3975.1.1-a-7c4c792b73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.111 [INFO][5093] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.113 [INFO][5093] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.113 [INFO][5093] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-7c4c792b73' Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.117 [INFO][5093] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.133 [INFO][5093] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.146 [INFO][5093] ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.158 [INFO][5093] ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.162 [INFO][5093] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.162 [INFO][5093] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.165 [INFO][5093] ipam.go 1685: Creating new handle: k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17 Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.176 [INFO][5093] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.187 [INFO][5093] ipam.go 1216: Successfully claimed IPs: [192.168.65.70/26] block=192.168.65.64/26 
handle="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.187 [INFO][5093] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.70/26] handle="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" host="ci-3975.1.1-a-7c4c792b73" Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.188 [INFO][5093] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:22.220788 containerd[1694]: 2024-07-02 08:20:22.188 [INFO][5093] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.65.70/26] IPv6=[] ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" HandleID="k8s-pod-network.429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" Jul 2 08:20:22.221934 containerd[1694]: 2024-07-02 08:20:22.195 [INFO][5076] k8s.go 386: Populated endpoint ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0", GenerateName:"calico-apiserver-f495f97bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a59a325-3eac-4e23-a96a-5bf4bb616603", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f495f97bb", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"", Pod:"calico-apiserver-f495f97bb-nkprr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32ab0dd1f31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:22.221934 containerd[1694]: 2024-07-02 08:20:22.196 [INFO][5076] k8s.go 387: Calico CNI using IPs: [192.168.65.70/32] ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" Jul 2 08:20:22.221934 containerd[1694]: 2024-07-02 08:20:22.196 [INFO][5076] dataplane_linux.go 68: Setting the host side veth name to cali32ab0dd1f31 ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" Jul 2 08:20:22.221934 containerd[1694]: 2024-07-02 08:20:22.202 [INFO][5076] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" Jul 2 08:20:22.221934 containerd[1694]: 2024-07-02 08:20:22.204 
[INFO][5076] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0", GenerateName:"calico-apiserver-f495f97bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a59a325-3eac-4e23-a96a-5bf4bb616603", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 20, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f495f97bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17", Pod:"calico-apiserver-f495f97bb-nkprr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32ab0dd1f31", MAC:"2e:be:d5:a8:32:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:22.221934 containerd[1694]: 2024-07-02 08:20:22.215 [INFO][5076] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17" Namespace="calico-apiserver" Pod="calico-apiserver-f495f97bb-nkprr" WorkloadEndpoint="ci--3975.1.1--a--7c4c792b73-k8s-calico--apiserver--f495f97bb--nkprr-eth0" Jul 2 08:20:22.252046 systemd[1]: Started cri-containerd-6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea.scope - libcontainer container 6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea. Jul 2 08:20:22.272988 containerd[1694]: time="2024-07-02T08:20:22.272591296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:20:22.272988 containerd[1694]: time="2024-07-02T08:20:22.272866816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:22.273484 containerd[1694]: time="2024-07-02T08:20:22.273389897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:20:22.273484 containerd[1694]: time="2024-07-02T08:20:22.273417817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:20:22.299562 systemd[1]: Started cri-containerd-429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17.scope - libcontainer container 429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17. 
Jul 2 08:20:22.320238 containerd[1694]: time="2024-07-02T08:20:22.320056547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f495f97bb-vqcxz,Uid:ec68846b-a10b-4244-adf4-a6283f0ddc0e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea\"" Jul 2 08:20:22.323995 containerd[1694]: time="2024-07-02T08:20:22.323950234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 08:20:22.364391 containerd[1694]: time="2024-07-02T08:20:22.364254031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f495f97bb-nkprr,Uid:6a59a325-3eac-4e23-a96a-5bf4bb616603,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17\"" Jul 2 08:20:23.657305 containerd[1694]: time="2024-07-02T08:20:23.656970313Z" level=info msg="StopPodSandbox for \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\"" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.706 [WARNING][5232] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0", GenerateName:"calico-kube-controllers-6ccd77b596-", Namespace:"calico-system", SelfLink:"", UID:"564b4aa8-7677-4a18-9b2e-7b70c8540c90", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd77b596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418", Pod:"calico-kube-controllers-6ccd77b596-zjlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47145dfdfaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.707 [INFO][5232] k8s.go 608: Cleaning up netns ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.707 [INFO][5232] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" iface="eth0" netns="" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.707 [INFO][5232] k8s.go 615: Releasing IP address(es) ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.707 [INFO][5232] utils.go 188: Calico CNI releasing IP address ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.737 [INFO][5238] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.737 [INFO][5238] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.737 [INFO][5238] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.747 [WARNING][5238] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.748 [INFO][5238] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.749 [INFO][5238] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:23.753890 containerd[1694]: 2024-07-02 08:20:23.752 [INFO][5232] k8s.go 621: Teardown processing complete. ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.753890 containerd[1694]: time="2024-07-02T08:20:23.753821899Z" level=info msg="TearDown network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\" successfully" Jul 2 08:20:23.753890 containerd[1694]: time="2024-07-02T08:20:23.753850939Z" level=info msg="StopPodSandbox for \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\" returns successfully" Jul 2 08:20:23.754943 containerd[1694]: time="2024-07-02T08:20:23.754905381Z" level=info msg="RemovePodSandbox for \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\"" Jul 2 08:20:23.754993 containerd[1694]: time="2024-07-02T08:20:23.754950621Z" level=info msg="Forcibly stopping sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\"" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.802 [WARNING][5256] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0", GenerateName:"calico-kube-controllers-6ccd77b596-", Namespace:"calico-system", SelfLink:"", UID:"564b4aa8-7677-4a18-9b2e-7b70c8540c90", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccd77b596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"3746b550f0a8a446b302663c1a865acc243ef404334a9bad7aaac7b897079418", Pod:"calico-kube-controllers-6ccd77b596-zjlqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47145dfdfaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.802 [INFO][5256] k8s.go 608: Cleaning up netns ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.802 [INFO][5256] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" iface="eth0" netns="" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.802 [INFO][5256] k8s.go 615: Releasing IP address(es) ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.802 [INFO][5256] utils.go 188: Calico CNI releasing IP address ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.832 [INFO][5263] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.832 [INFO][5263] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.832 [INFO][5263] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.841 [WARNING][5263] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.841 [INFO][5263] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" HandleID="k8s-pod-network.e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Workload="ci--3975.1.1--a--7c4c792b73-k8s-calico--kube--controllers--6ccd77b596--zjlqp-eth0" Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.843 [INFO][5263] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:23.847664 containerd[1694]: 2024-07-02 08:20:23.845 [INFO][5256] k8s.go 621: Teardown processing complete. ContainerID="e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88" Jul 2 08:20:23.847664 containerd[1694]: time="2024-07-02T08:20:23.847645319Z" level=info msg="TearDown network for sandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\" successfully" Jul 2 08:20:23.859018 containerd[1694]: time="2024-07-02T08:20:23.858808901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:20:23.859018 containerd[1694]: time="2024-07-02T08:20:23.858885701Z" level=info msg="RemovePodSandbox \"e4dcd356e72407d7b491075013d62c300f279a5a403d4d524ee0ef7684fc4e88\" returns successfully" Jul 2 08:20:23.860270 containerd[1694]: time="2024-07-02T08:20:23.859921223Z" level=info msg="StopPodSandbox for \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\"" Jul 2 08:20:23.979666 systemd-networkd[1332]: calie5171cf12ca: Gained IPv6LL Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.919 [WARNING][5281] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec1ab198-8e93-4749-942e-804a7ceb88e7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e", Pod:"csi-node-driver-cvsr9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.65/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali196a473fc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.920 [INFO][5281] k8s.go 608: Cleaning up netns ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.920 [INFO][5281] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" iface="eth0" netns="" Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.920 [INFO][5281] k8s.go 615: Releasing IP address(es) ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.920 [INFO][5281] utils.go 188: Calico CNI releasing IP address ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.957 [INFO][5288] ipam_plugin.go 411: Releasing address using handleID ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.957 [INFO][5288] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.957 [INFO][5288] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.971 [WARNING][5288] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.971 [INFO][5288] ipam_plugin.go 439: Releasing address using workloadID ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.974 [INFO][5288] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:23.981582 containerd[1694]: 2024-07-02 08:20:23.976 [INFO][5281] k8s.go 621: Teardown processing complete. ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:23.981582 containerd[1694]: time="2024-07-02T08:20:23.979825493Z" level=info msg="TearDown network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\" successfully" Jul 2 08:20:23.981582 containerd[1694]: time="2024-07-02T08:20:23.979851093Z" level=info msg="StopPodSandbox for \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\" returns successfully" Jul 2 08:20:23.981582 containerd[1694]: time="2024-07-02T08:20:23.980797175Z" level=info msg="RemovePodSandbox for \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\"" Jul 2 08:20:23.981582 containerd[1694]: time="2024-07-02T08:20:23.980830535Z" level=info msg="Forcibly stopping sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\"" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.033 [WARNING][5306] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec1ab198-8e93-4749-942e-804a7ceb88e7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"c184ceed81fab627c6243d6e05627524f4a8a90025dfda6d6cb8113f10ff576e", Pod:"csi-node-driver-cvsr9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali196a473fc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.033 [INFO][5306] k8s.go 608: Cleaning up netns ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.033 [INFO][5306] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" iface="eth0" netns="" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.033 [INFO][5306] k8s.go 615: Releasing IP address(es) ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.033 [INFO][5306] utils.go 188: Calico CNI releasing IP address ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.055 [INFO][5312] ipam_plugin.go 411: Releasing address using handleID ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.055 [INFO][5312] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.055 [INFO][5312] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.063 [WARNING][5312] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.064 [INFO][5312] ipam_plugin.go 439: Releasing address using workloadID ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" HandleID="k8s-pod-network.08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Workload="ci--3975.1.1--a--7c4c792b73-k8s-csi--node--driver--cvsr9-eth0" Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.065 [INFO][5312] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:24.068658 containerd[1694]: 2024-07-02 08:20:24.066 [INFO][5306] k8s.go 621: Teardown processing complete. ContainerID="08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d" Jul 2 08:20:24.069152 containerd[1694]: time="2024-07-02T08:20:24.068815824Z" level=info msg="TearDown network for sandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\" successfully" Jul 2 08:20:24.170428 systemd-networkd[1332]: cali32ab0dd1f31: Gained IPv6LL Jul 2 08:20:24.254592 containerd[1694]: time="2024-07-02T08:20:24.253954579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:20:24.254592 containerd[1694]: time="2024-07-02T08:20:24.254037139Z" level=info msg="RemovePodSandbox \"08fdd21fccbd765d28f5b4c7c703dc0d8279f3df90bda268556b72cb13dcfe3d\" returns successfully" Jul 2 08:20:24.264619 containerd[1694]: time="2024-07-02T08:20:24.264576280Z" level=info msg="StopPodSandbox for \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\"" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.330 [WARNING][5334] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fc8c61fa-97a4-4862-b730-0646754d9bdf", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4", Pod:"coredns-7db6d8ff4d-87z6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78f5facd66d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.331 [INFO][5334] k8s.go 608: Cleaning up netns ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.332 [INFO][5334] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" iface="eth0" netns="" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.332 [INFO][5334] k8s.go 615: Releasing IP address(es) ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.332 [INFO][5334] utils.go 188: Calico CNI releasing IP address ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.364 [INFO][5341] ipam_plugin.go 411: Releasing address using handleID ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.364 [INFO][5341] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.364 [INFO][5341] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.374 [WARNING][5341] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.374 [INFO][5341] ipam_plugin.go 439: Releasing address using workloadID ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.376 [INFO][5341] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:24.382391 containerd[1694]: 2024-07-02 08:20:24.378 [INFO][5334] k8s.go 621: Teardown processing complete. 
ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.382816 containerd[1694]: time="2024-07-02T08:20:24.382647586Z" level=info msg="TearDown network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\" successfully" Jul 2 08:20:24.382816 containerd[1694]: time="2024-07-02T08:20:24.382677426Z" level=info msg="StopPodSandbox for \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\" returns successfully" Jul 2 08:20:24.383660 containerd[1694]: time="2024-07-02T08:20:24.383625188Z" level=info msg="RemovePodSandbox for \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\"" Jul 2 08:20:24.383854 containerd[1694]: time="2024-07-02T08:20:24.383754588Z" level=info msg="Forcibly stopping sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\"" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.456 [WARNING][5360] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fc8c61fa-97a4-4862-b730-0646754d9bdf", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"c81174c571af281e6dbda070bd90be962e8c5bc9fa507f393821a6bcb7280ab4", Pod:"coredns-7db6d8ff4d-87z6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78f5facd66d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.457 [INFO][5360] k8s.go 608: 
Cleaning up netns ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.457 [INFO][5360] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" iface="eth0" netns="" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.457 [INFO][5360] k8s.go 615: Releasing IP address(es) ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.457 [INFO][5360] utils.go 188: Calico CNI releasing IP address ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.507 [INFO][5366] ipam_plugin.go 411: Releasing address using handleID ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.508 [INFO][5366] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.508 [INFO][5366] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.518 [WARNING][5366] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.518 [INFO][5366] ipam_plugin.go 439: Releasing address using workloadID ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" HandleID="k8s-pod-network.79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--87z6p-eth0" Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.519 [INFO][5366] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:24.523421 containerd[1694]: 2024-07-02 08:20:24.521 [INFO][5360] k8s.go 621: Teardown processing complete. ContainerID="79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda" Jul 2 08:20:24.523421 containerd[1694]: time="2024-07-02T08:20:24.523364256Z" level=info msg="TearDown network for sandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\" successfully" Jul 2 08:20:24.537071 containerd[1694]: time="2024-07-02T08:20:24.536807842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:20:24.537071 containerd[1694]: time="2024-07-02T08:20:24.536898762Z" level=info msg="RemovePodSandbox \"79c605716efa0b39e1ef1c30663cfdbeb58b1623cd08635060b5916129e3ddda\" returns successfully" Jul 2 08:20:24.537665 containerd[1694]: time="2024-07-02T08:20:24.537612124Z" level=info msg="StopPodSandbox for \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\"" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.590 [WARNING][5385] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0622a859-f890-42d7-963e-91f435085671", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e", Pod:"coredns-7db6d8ff4d-sbktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f319f368e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.590 [INFO][5385] k8s.go 608: Cleaning up netns ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.590 [INFO][5385] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" iface="eth0" netns="" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.590 [INFO][5385] k8s.go 615: Releasing IP address(es) ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.590 [INFO][5385] utils.go 188: Calico CNI releasing IP address ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.625 [INFO][5391] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.625 [INFO][5391] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.625 [INFO][5391] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.637 [WARNING][5391] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.637 [INFO][5391] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.640 [INFO][5391] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:24.643902 containerd[1694]: 2024-07-02 08:20:24.641 [INFO][5385] k8s.go 621: Teardown processing complete. 
ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.644510 containerd[1694]: time="2024-07-02T08:20:24.644432929Z" level=info msg="TearDown network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\" successfully" Jul 2 08:20:24.644510 containerd[1694]: time="2024-07-02T08:20:24.644526089Z" level=info msg="StopPodSandbox for \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\" returns successfully" Jul 2 08:20:24.645206 containerd[1694]: time="2024-07-02T08:20:24.645036690Z" level=info msg="RemovePodSandbox for \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\"" Jul 2 08:20:24.645206 containerd[1694]: time="2024-07-02T08:20:24.645075330Z" level=info msg="Forcibly stopping sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\"" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.707 [WARNING][5409] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0622a859-f890-42d7-963e-91f435085671", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-7c4c792b73", ContainerID:"231b92b0dc9afd487976fe9dd119f7d8e5f44666ffc733acc6277bde795ec06e", Pod:"coredns-7db6d8ff4d-sbktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f319f368e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.707 [INFO][5409] k8s.go 608: 
Cleaning up netns ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.707 [INFO][5409] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" iface="eth0" netns="" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.707 [INFO][5409] k8s.go 615: Releasing IP address(es) ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.707 [INFO][5409] utils.go 188: Calico CNI releasing IP address ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.738 [INFO][5416] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.740 [INFO][5416] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.740 [INFO][5416] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.749 [WARNING][5416] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.749 [INFO][5416] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" HandleID="k8s-pod-network.3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Workload="ci--3975.1.1--a--7c4c792b73-k8s-coredns--7db6d8ff4d--sbktf-eth0" Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.751 [INFO][5416] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:20:24.754677 containerd[1694]: 2024-07-02 08:20:24.752 [INFO][5409] k8s.go 621: Teardown processing complete. ContainerID="3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82" Jul 2 08:20:24.755471 containerd[1694]: time="2024-07-02T08:20:24.754958261Z" level=info msg="TearDown network for sandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\" successfully" Jul 2 08:20:24.977802 containerd[1694]: time="2024-07-02T08:20:24.977649209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:20:24.977802 containerd[1694]: time="2024-07-02T08:20:24.977727609Z" level=info msg="RemovePodSandbox \"3d9884c6fe377d55420301b6dbd87a3a3807a347c3135617626e20c8c9c22b82\" returns successfully" Jul 2 08:20:25.071242 containerd[1694]: time="2024-07-02T08:20:25.070663747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 08:20:25.071242 containerd[1694]: time="2024-07-02T08:20:25.070780667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:25.073098 containerd[1694]: time="2024-07-02T08:20:25.073020672Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:25.080013 containerd[1694]: time="2024-07-02T08:20:25.079952405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:25.080851 containerd[1694]: time="2024-07-02T08:20:25.080727486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.756540531s" Jul 2 08:20:25.080995 containerd[1694]: time="2024-07-02T08:20:25.080976607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 08:20:25.084118 containerd[1694]: time="2024-07-02T08:20:25.084076773Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 08:20:25.089847 containerd[1694]: time="2024-07-02T08:20:25.089797384Z" level=info msg="CreateContainer within sandbox \"6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 08:20:25.151974 containerd[1694]: time="2024-07-02T08:20:25.151891823Z" level=info msg="CreateContainer within sandbox \"6c1d967ff7fe7e3a8db734b88f0451993a575d268ca9ffe5f4f23233da5afeea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f5b0d66ca0ed8d10cd874fa4353c20ccf783659284465fb0e82642f8a93019bb\"" Jul 2 08:20:25.155412 containerd[1694]: time="2024-07-02T08:20:25.155118109Z" level=info msg="StartContainer for \"f5b0d66ca0ed8d10cd874fa4353c20ccf783659284465fb0e82642f8a93019bb\"" Jul 2 08:20:25.197547 systemd[1]: Started cri-containerd-f5b0d66ca0ed8d10cd874fa4353c20ccf783659284465fb0e82642f8a93019bb.scope - libcontainer container f5b0d66ca0ed8d10cd874fa4353c20ccf783659284465fb0e82642f8a93019bb. 
Jul 2 08:20:25.236802 containerd[1694]: time="2024-07-02T08:20:25.236290985Z" level=info msg="StartContainer for \"f5b0d66ca0ed8d10cd874fa4353c20ccf783659284465fb0e82642f8a93019bb\" returns successfully" Jul 2 08:20:25.395703 containerd[1694]: time="2024-07-02T08:20:25.395642691Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:20:25.403499 containerd[1694]: time="2024-07-02T08:20:25.403447466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jul 2 08:20:25.411277 containerd[1694]: time="2024-07-02T08:20:25.411227881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 327.108428ms" Jul 2 08:20:25.411489 containerd[1694]: time="2024-07-02T08:20:25.411341121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 08:20:25.414420 containerd[1694]: time="2024-07-02T08:20:25.414045646Z" level=info msg="CreateContainer within sandbox \"429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 08:20:25.455848 containerd[1694]: time="2024-07-02T08:20:25.455600246Z" level=info msg="CreateContainer within sandbox \"429d2c218f8def968c2e74ffa90306f120765a52f949a1b8572a180e30b2ef17\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8cb2e5f5adea76b3dd79790a6ada5249524f7f51a053c1c13c91f8911c4073e0\"" Jul 2 08:20:25.457912 containerd[1694]: 
time="2024-07-02T08:20:25.457734010Z" level=info msg="StartContainer for \"8cb2e5f5adea76b3dd79790a6ada5249524f7f51a053c1c13c91f8911c4073e0\"" Jul 2 08:20:25.488966 systemd[1]: Started cri-containerd-8cb2e5f5adea76b3dd79790a6ada5249524f7f51a053c1c13c91f8911c4073e0.scope - libcontainer container 8cb2e5f5adea76b3dd79790a6ada5249524f7f51a053c1c13c91f8911c4073e0. Jul 2 08:20:25.531954 containerd[1694]: time="2024-07-02T08:20:25.531903433Z" level=info msg="StartContainer for \"8cb2e5f5adea76b3dd79790a6ada5249524f7f51a053c1c13c91f8911c4073e0\" returns successfully" Jul 2 08:20:25.827483 update_engine[1662]: I0702 08:20:25.827337 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 08:20:25.827809 update_engine[1662]: I0702 08:20:25.827605 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 08:20:25.828083 update_engine[1662]: I0702 08:20:25.827828 1662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 08:20:25.842524 update_engine[1662]: E0702 08:20:25.842478 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 08:20:25.842674 update_engine[1662]: I0702 08:20:25.842553 1662 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 08:20:25.884576 kubelet[3189]: I0702 08:20:25.884503 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f495f97bb-nkprr" podStartSLOduration=2.83840988 podStartE2EDuration="5.884475808s" podCreationTimestamp="2024-07-02 08:20:20 +0000 UTC" firstStartedPulling="2024-07-02 08:20:22.366037755 +0000 UTC m=+58.855709566" lastFinishedPulling="2024-07-02 08:20:25.412103683 +0000 UTC m=+61.901775494" observedRunningTime="2024-07-02 08:20:25.873195678 +0000 UTC m=+62.362867529" watchObservedRunningTime="2024-07-02 08:20:25.884475808 +0000 UTC m=+62.374147619" Jul 2 08:20:25.885036 kubelet[3189]: I0702 08:20:25.884816 3189 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-apiserver/calico-apiserver-f495f97bb-vqcxz" podStartSLOduration=3.123855788 podStartE2EDuration="5.884809688s" podCreationTimestamp="2024-07-02 08:20:20 +0000 UTC" firstStartedPulling="2024-07-02 08:20:22.322856312 +0000 UTC m=+58.812528123" lastFinishedPulling="2024-07-02 08:20:25.083810212 +0000 UTC m=+61.573482023" observedRunningTime="2024-07-02 08:20:25.883485567 +0000 UTC m=+62.373157378" watchObservedRunningTime="2024-07-02 08:20:25.884809688 +0000 UTC m=+62.374481459" Jul 2 08:20:26.863697 kubelet[3189]: I0702 08:20:26.863656 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:20:26.864340 kubelet[3189]: I0702 08:20:26.863656 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:20:35.835530 update_engine[1662]: I0702 08:20:35.835480 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 08:20:35.836024 update_engine[1662]: I0702 08:20:35.835666 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 08:20:35.836024 update_engine[1662]: I0702 08:20:35.835883 1662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 08:20:35.842248 update_engine[1662]: E0702 08:20:35.842218 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 08:20:35.842356 update_engine[1662]: I0702 08:20:35.842278 1662 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 08:20:39.216431 kubelet[3189]: I0702 08:20:39.215757 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:20:45.835272 update_engine[1662]: I0702 08:20:45.835160 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 08:20:45.835674 update_engine[1662]: I0702 08:20:45.835455 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 08:20:45.835702 update_engine[1662]: I0702 08:20:45.835690 1662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 08:20:45.843228 update_engine[1662]: E0702 08:20:45.843194 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 08:20:45.843379 update_engine[1662]: I0702 08:20:45.843254 1662 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 08:20:45.843379 update_engine[1662]: I0702 08:20:45.843259 1662 omaha_request_action.cc:617] Omaha request response: Jul 2 08:20:45.843379 update_engine[1662]: E0702 08:20:45.843360 1662 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 2 08:20:45.843379 update_engine[1662]: I0702 08:20:45.843376 1662 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 08:20:45.843379 update_engine[1662]: I0702 08:20:45.843380 1662 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843382 1662 update_attempter.cc:306] Processing Done. Jul 2 08:20:45.843511 update_engine[1662]: E0702 08:20:45.843395 1662 update_attempter.cc:619] Update failed. Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843398 1662 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843400 1662 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843405 1662 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843468 1662 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843487 1662 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843490 1662 omaha_request_action.cc:272] Request: Jul 2 08:20:45.843511 update_engine[1662]: Jul 2 08:20:45.843511 update_engine[1662]: Jul 2 08:20:45.843511 update_engine[1662]: Jul 2 08:20:45.843511 update_engine[1662]: Jul 2 08:20:45.843511 update_engine[1662]: Jul 2 08:20:45.843511 update_engine[1662]: Jul 2 08:20:45.843511 update_engine[1662]: I0702 08:20:45.843494 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 08:20:45.843792 update_engine[1662]: I0702 08:20:45.843611 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 08:20:45.843950 update_engine[1662]: I0702 08:20:45.843812 1662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 08:20:45.844107 locksmithd[1710]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 08:20:45.862128 update_engine[1662]: E0702 08:20:45.862095 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 08:20:45.862200 update_engine[1662]: I0702 08:20:45.862162 1662 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 08:20:45.862200 update_engine[1662]: I0702 08:20:45.862167 1662 omaha_request_action.cc:617] Omaha request response: Jul 2 08:20:45.862200 update_engine[1662]: I0702 08:20:45.862171 1662 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 08:20:45.862200 update_engine[1662]: I0702 08:20:45.862174 1662 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 08:20:45.862200 update_engine[1662]: I0702 08:20:45.862177 1662 update_attempter.cc:306] Processing Done. Jul 2 08:20:45.862200 update_engine[1662]: I0702 08:20:45.862181 1662 update_attempter.cc:310] Error event sent. Jul 2 08:20:45.862200 update_engine[1662]: I0702 08:20:45.862190 1662 update_check_scheduler.cc:74] Next update check in 42m44s Jul 2 08:20:45.862584 locksmithd[1710]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 08:20:47.184770 kubelet[3189]: I0702 08:20:47.184486 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:21:19.415602 systemd[1]: Started sshd@7-10.200.20.44:22-10.200.16.10:60296.service - OpenSSH per-connection server daemon (10.200.16.10:60296). 
Jul 2 08:21:19.860552 sshd[5663]: Accepted publickey for core from 10.200.16.10 port 60296 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:19.862831 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:19.867346 systemd-logind[1658]: New session 10 of user core. Jul 2 08:21:19.871505 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:21:20.350242 sshd[5663]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:20.354706 systemd[1]: sshd@7-10.200.20.44:22-10.200.16.10:60296.service: Deactivated successfully. Jul 2 08:21:20.356782 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:21:20.357801 systemd-logind[1658]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:21:20.358986 systemd-logind[1658]: Removed session 10. Jul 2 08:21:25.431642 systemd[1]: Started sshd@8-10.200.20.44:22-10.200.16.10:60302.service - OpenSSH per-connection server daemon (10.200.16.10:60302). Jul 2 08:21:25.840304 sshd[5704]: Accepted publickey for core from 10.200.16.10 port 60302 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:25.841990 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:25.846257 systemd-logind[1658]: New session 11 of user core. Jul 2 08:21:25.853519 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 08:21:26.218165 sshd[5704]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:26.222015 systemd-logind[1658]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:21:26.222216 systemd[1]: sshd@8-10.200.20.44:22-10.200.16.10:60302.service: Deactivated successfully. Jul 2 08:21:26.224713 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:21:26.227164 systemd-logind[1658]: Removed session 11. 
Jul 2 08:21:31.310614 systemd[1]: Started sshd@9-10.200.20.44:22-10.200.16.10:43426.service - OpenSSH per-connection server daemon (10.200.16.10:43426). Jul 2 08:21:31.759623 sshd[5723]: Accepted publickey for core from 10.200.16.10 port 43426 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:31.761094 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:31.765844 systemd-logind[1658]: New session 12 of user core. Jul 2 08:21:31.770500 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 08:21:32.179730 sshd[5723]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:32.182652 systemd[1]: sshd@9-10.200.20.44:22-10.200.16.10:43426.service: Deactivated successfully. Jul 2 08:21:32.185110 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:21:32.186912 systemd-logind[1658]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:21:32.188603 systemd-logind[1658]: Removed session 12. Jul 2 08:21:32.270285 systemd[1]: Started sshd@10-10.200.20.44:22-10.200.16.10:43432.service - OpenSSH per-connection server daemon (10.200.16.10:43432). Jul 2 08:21:32.688906 sshd[5737]: Accepted publickey for core from 10.200.16.10 port 43432 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:32.690362 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:32.694335 systemd-logind[1658]: New session 13 of user core. Jul 2 08:21:32.700487 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 08:21:33.099547 sshd[5737]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:33.104447 systemd[1]: sshd@10-10.200.20.44:22-10.200.16.10:43432.service: Deactivated successfully. Jul 2 08:21:33.106523 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:21:33.108596 systemd-logind[1658]: Session 13 logged out. Waiting for processes to exit. 
Jul 2 08:21:33.109886 systemd-logind[1658]: Removed session 13. Jul 2 08:21:33.181667 systemd[1]: Started sshd@11-10.200.20.44:22-10.200.16.10:43448.service - OpenSSH per-connection server daemon (10.200.16.10:43448). Jul 2 08:21:33.590681 sshd[5749]: Accepted publickey for core from 10.200.16.10 port 43448 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:33.592138 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:33.596113 systemd-logind[1658]: New session 14 of user core. Jul 2 08:21:33.602523 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 08:21:33.969185 sshd[5749]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:33.972926 systemd[1]: sshd@11-10.200.20.44:22-10.200.16.10:43448.service: Deactivated successfully. Jul 2 08:21:33.976091 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:21:33.977628 systemd-logind[1658]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:21:33.978862 systemd-logind[1658]: Removed session 14. Jul 2 08:21:39.055463 systemd[1]: Started sshd@12-10.200.20.44:22-10.200.16.10:37640.service - OpenSSH per-connection server daemon (10.200.16.10:37640). Jul 2 08:21:39.507404 sshd[5783]: Accepted publickey for core from 10.200.16.10 port 37640 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:39.509637 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:39.514133 systemd-logind[1658]: New session 15 of user core. Jul 2 08:21:39.523494 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 08:21:39.909546 sshd[5783]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:39.915224 systemd[1]: sshd@12-10.200.20.44:22-10.200.16.10:37640.service: Deactivated successfully. Jul 2 08:21:39.920069 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:21:39.922626 systemd-logind[1658]: Session 15 logged out. 
Waiting for processes to exit. Jul 2 08:21:39.924168 systemd-logind[1658]: Removed session 15. Jul 2 08:21:44.991163 systemd[1]: Started sshd@13-10.200.20.44:22-10.200.16.10:37642.service - OpenSSH per-connection server daemon (10.200.16.10:37642). Jul 2 08:21:45.440563 sshd[5824]: Accepted publickey for core from 10.200.16.10 port 37642 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:45.442013 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:45.446712 systemd-logind[1658]: New session 16 of user core. Jul 2 08:21:45.452503 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 08:21:45.837986 sshd[5824]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:45.841680 systemd-logind[1658]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:21:45.842395 systemd[1]: sshd@13-10.200.20.44:22-10.200.16.10:37642.service: Deactivated successfully. Jul 2 08:21:45.845733 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:21:45.846869 systemd-logind[1658]: Removed session 16. Jul 2 08:21:50.918645 systemd[1]: Started sshd@14-10.200.20.44:22-10.200.16.10:50686.service - OpenSSH per-connection server daemon (10.200.16.10:50686). Jul 2 08:21:51.325969 sshd[5842]: Accepted publickey for core from 10.200.16.10 port 50686 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:51.327216 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:51.331619 systemd-logind[1658]: New session 17 of user core. Jul 2 08:21:51.337494 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 08:21:51.686988 sshd[5842]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:51.689870 systemd-logind[1658]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:21:51.690081 systemd[1]: sshd@14-10.200.20.44:22-10.200.16.10:50686.service: Deactivated successfully. 
Jul 2 08:21:51.692766 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:21:51.695230 systemd-logind[1658]: Removed session 17. Jul 2 08:21:51.768623 systemd[1]: Started sshd@15-10.200.20.44:22-10.200.16.10:50700.service - OpenSSH per-connection server daemon (10.200.16.10:50700). Jul 2 08:21:52.180079 sshd[5855]: Accepted publickey for core from 10.200.16.10 port 50700 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:52.181599 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:52.185462 systemd-logind[1658]: New session 18 of user core. Jul 2 08:21:52.193494 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 08:21:52.642925 sshd[5855]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:52.646633 systemd[1]: sshd@15-10.200.20.44:22-10.200.16.10:50700.service: Deactivated successfully. Jul 2 08:21:52.649005 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:21:52.649887 systemd-logind[1658]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:21:52.651290 systemd-logind[1658]: Removed session 18. Jul 2 08:21:52.726422 systemd[1]: Started sshd@16-10.200.20.44:22-10.200.16.10:50712.service - OpenSSH per-connection server daemon (10.200.16.10:50712). Jul 2 08:21:53.170721 sshd[5867]: Accepted publickey for core from 10.200.16.10 port 50712 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:53.172092 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:53.176002 systemd-logind[1658]: New session 19 of user core. Jul 2 08:21:53.184610 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 08:21:55.149474 sshd[5867]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:55.154147 systemd[1]: sshd@16-10.200.20.44:22-10.200.16.10:50712.service: Deactivated successfully. 
Jul 2 08:21:55.158602 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 08:21:55.160291 systemd-logind[1658]: Session 19 logged out. Waiting for processes to exit. Jul 2 08:21:55.162528 systemd-logind[1658]: Removed session 19. Jul 2 08:21:55.235609 systemd[1]: Started sshd@17-10.200.20.44:22-10.200.16.10:50720.service - OpenSSH per-connection server daemon (10.200.16.10:50720). Jul 2 08:21:55.677341 sshd[5905]: Accepted publickey for core from 10.200.16.10 port 50720 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:55.678816 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:55.683615 systemd-logind[1658]: New session 20 of user core. Jul 2 08:21:55.693543 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 08:21:56.177053 sshd[5905]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:56.181590 systemd[1]: sshd@17-10.200.20.44:22-10.200.16.10:50720.service: Deactivated successfully. Jul 2 08:21:56.183480 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 08:21:56.185167 systemd-logind[1658]: Session 20 logged out. Waiting for processes to exit. Jul 2 08:21:56.186860 systemd-logind[1658]: Removed session 20. Jul 2 08:21:56.264754 systemd[1]: Started sshd@18-10.200.20.44:22-10.200.16.10:50722.service - OpenSSH per-connection server daemon (10.200.16.10:50722). Jul 2 08:21:56.702760 sshd[5915]: Accepted publickey for core from 10.200.16.10 port 50722 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:21:56.704228 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:21:56.707989 systemd-logind[1658]: New session 21 of user core. Jul 2 08:21:56.718549 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 2 08:21:57.080900 sshd[5915]: pam_unix(sshd:session): session closed for user core Jul 2 08:21:57.085451 systemd[1]: sshd@18-10.200.20.44:22-10.200.16.10:50722.service: Deactivated successfully. Jul 2 08:21:57.088044 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 08:21:57.089219 systemd-logind[1658]: Session 21 logged out. Waiting for processes to exit. Jul 2 08:21:57.090373 systemd-logind[1658]: Removed session 21. Jul 2 08:22:02.165605 systemd[1]: Started sshd@19-10.200.20.44:22-10.200.16.10:53684.service - OpenSSH per-connection server daemon (10.200.16.10:53684). Jul 2 08:22:02.615705 sshd[5936]: Accepted publickey for core from 10.200.16.10 port 53684 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:22:02.617080 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:22:02.622029 systemd-logind[1658]: New session 22 of user core. Jul 2 08:22:02.625509 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 08:22:03.013620 sshd[5936]: pam_unix(sshd:session): session closed for user core Jul 2 08:22:03.016530 systemd[1]: sshd@19-10.200.20.44:22-10.200.16.10:53684.service: Deactivated successfully. Jul 2 08:22:03.019067 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:22:03.021616 systemd-logind[1658]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:22:03.023409 systemd-logind[1658]: Removed session 22. Jul 2 08:22:08.093588 systemd[1]: Started sshd@20-10.200.20.44:22-10.200.16.10:53694.service - OpenSSH per-connection server daemon (10.200.16.10:53694). Jul 2 08:22:08.503903 sshd[5954]: Accepted publickey for core from 10.200.16.10 port 53694 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:22:08.505358 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:22:08.510099 systemd-logind[1658]: New session 23 of user core. 
Jul 2 08:22:08.524544 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 08:22:08.863889 sshd[5954]: pam_unix(sshd:session): session closed for user core Jul 2 08:22:08.867644 systemd[1]: sshd@20-10.200.20.44:22-10.200.16.10:53694.service: Deactivated successfully. Jul 2 08:22:08.870192 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:22:08.871698 systemd-logind[1658]: Session 23 logged out. Waiting for processes to exit. Jul 2 08:22:08.873450 systemd-logind[1658]: Removed session 23. Jul 2 08:22:13.941690 systemd[1]: Started sshd@21-10.200.20.44:22-10.200.16.10:59436.service - OpenSSH per-connection server daemon (10.200.16.10:59436). Jul 2 08:22:14.357424 sshd[5991]: Accepted publickey for core from 10.200.16.10 port 59436 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:22:14.358989 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:22:14.362810 systemd-logind[1658]: New session 24 of user core. Jul 2 08:22:14.368475 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 08:22:14.720457 sshd[5991]: pam_unix(sshd:session): session closed for user core Jul 2 08:22:14.724027 systemd[1]: sshd@21-10.200.20.44:22-10.200.16.10:59436.service: Deactivated successfully. Jul 2 08:22:14.726469 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:22:14.727621 systemd-logind[1658]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:22:14.728787 systemd-logind[1658]: Removed session 24. Jul 2 08:22:19.799014 systemd[1]: Started sshd@22-10.200.20.44:22-10.200.16.10:59852.service - OpenSSH per-connection server daemon (10.200.16.10:59852). 
Jul 2 08:22:20.210532 sshd[6028]: Accepted publickey for core from 10.200.16.10 port 59852 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:22:20.211965 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:22:20.215981 systemd-logind[1658]: New session 25 of user core. Jul 2 08:22:20.222588 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 08:22:20.568232 sshd[6028]: pam_unix(sshd:session): session closed for user core Jul 2 08:22:20.572230 systemd[1]: sshd@22-10.200.20.44:22-10.200.16.10:59852.service: Deactivated successfully. Jul 2 08:22:20.574398 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 08:22:20.575415 systemd-logind[1658]: Session 25 logged out. Waiting for processes to exit. Jul 2 08:22:20.576301 systemd-logind[1658]: Removed session 25. Jul 2 08:22:25.651643 systemd[1]: Started sshd@23-10.200.20.44:22-10.200.16.10:59864.service - OpenSSH per-connection server daemon (10.200.16.10:59864). Jul 2 08:22:26.091138 sshd[6063]: Accepted publickey for core from 10.200.16.10 port 59864 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:22:26.092384 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:22:26.096911 systemd-logind[1658]: New session 26 of user core. Jul 2 08:22:26.102510 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 08:22:26.470576 sshd[6063]: pam_unix(sshd:session): session closed for user core Jul 2 08:22:26.474244 systemd[1]: sshd@23-10.200.20.44:22-10.200.16.10:59864.service: Deactivated successfully. Jul 2 08:22:26.476071 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 08:22:26.476842 systemd-logind[1658]: Session 26 logged out. Waiting for processes to exit. Jul 2 08:22:26.477749 systemd-logind[1658]: Removed session 26. 
Jul 2 08:22:31.550304 systemd[1]: Started sshd@24-10.200.20.44:22-10.200.16.10:59728.service - OpenSSH per-connection server daemon (10.200.16.10:59728). Jul 2 08:22:31.962548 sshd[6081]: Accepted publickey for core from 10.200.16.10 port 59728 ssh2: RSA SHA256:JF+Iq+s3ptCR9+FLNQti8wSNdDAqc44PPbv0a9uBpfQ Jul 2 08:22:31.963956 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:22:31.968537 systemd-logind[1658]: New session 27 of user core. Jul 2 08:22:31.976496 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 08:22:32.335568 sshd[6081]: pam_unix(sshd:session): session closed for user core Jul 2 08:22:32.339446 systemd[1]: sshd@24-10.200.20.44:22-10.200.16.10:59728.service: Deactivated successfully. Jul 2 08:22:32.341609 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 08:22:32.342830 systemd-logind[1658]: Session 27 logged out. Waiting for processes to exit. Jul 2 08:22:32.343824 systemd-logind[1658]: Removed session 27. Jul 2 08:22:46.679488 systemd[1]: cri-containerd-a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916.scope: Deactivated successfully. Jul 2 08:22:46.680118 systemd[1]: cri-containerd-a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916.scope: Consumed 5.258s CPU time. 
Jul 2 08:22:46.699482 containerd[1694]: time="2024-07-02T08:22:46.699410562Z" level=info msg="shim disconnected" id=a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916 namespace=k8s.io
Jul 2 08:22:46.699482 containerd[1694]: time="2024-07-02T08:22:46.699470962Z" level=warning msg="cleaning up after shim disconnected" id=a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916 namespace=k8s.io
Jul 2 08:22:46.699482 containerd[1694]: time="2024-07-02T08:22:46.699479602Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:22:46.700664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916-rootfs.mount: Deactivated successfully.
Jul 2 08:22:46.940503 kubelet[3189]: E0702 08:22:46.940067 3189 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7c4c792b73?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 08:22:47.135710 kubelet[3189]: I0702 08:22:47.135605 3189 scope.go:117] "RemoveContainer" containerID="a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916"
Jul 2 08:22:47.138884 containerd[1694]: time="2024-07-02T08:22:47.138818504Z" level=info msg="CreateContainer within sandbox \"f52868ac0293c4e9f8a2b9e4fbf0ffc348adaa050539e23149400a65df9a2323\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 08:22:47.173708 containerd[1694]: time="2024-07-02T08:22:47.173607507Z" level=info msg="CreateContainer within sandbox \"f52868ac0293c4e9f8a2b9e4fbf0ffc348adaa050539e23149400a65df9a2323\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa\""
Jul 2 08:22:47.174351 containerd[1694]: time="2024-07-02T08:22:47.174188908Z" level=info msg="StartContainer for \"f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa\""
Jul 2 08:22:47.210576 systemd[1]: Started cri-containerd-f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa.scope - libcontainer container f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa.
Jul 2 08:22:47.238881 containerd[1694]: time="2024-07-02T08:22:47.238818228Z" level=info msg="StartContainer for \"f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa\" returns successfully"
Jul 2 08:22:48.230679 systemd[1]: cri-containerd-1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943.scope: Deactivated successfully.
Jul 2 08:22:48.231415 systemd[1]: cri-containerd-1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943.scope: Consumed 3.312s CPU time, 22.2M memory peak, 0B memory swap peak.
Jul 2 08:22:48.253152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943-rootfs.mount: Deactivated successfully.
Jul 2 08:22:48.254177 containerd[1694]: time="2024-07-02T08:22:48.253673719Z" level=info msg="shim disconnected" id=1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943 namespace=k8s.io
Jul 2 08:22:48.254177 containerd[1694]: time="2024-07-02T08:22:48.253732719Z" level=warning msg="cleaning up after shim disconnected" id=1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943 namespace=k8s.io
Jul 2 08:22:48.254177 containerd[1694]: time="2024-07-02T08:22:48.253742519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:22:48.685195 kubelet[3189]: E0702 08:22:48.685150 3189 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.44:45862->10.200.20.24:2379: read: connection timed out"
Jul 2 08:22:48.690861 systemd[1]: cri-containerd-8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888.scope: Deactivated successfully.
Jul 2 08:22:48.691268 systemd[1]: cri-containerd-8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888.scope: Consumed 2.646s CPU time, 16.1M memory peak, 0B memory swap peak.
Jul 2 08:22:48.712115 containerd[1694]: time="2024-07-02T08:22:48.712045405Z" level=info msg="shim disconnected" id=8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888 namespace=k8s.io
Jul 2 08:22:48.712270 containerd[1694]: time="2024-07-02T08:22:48.712106045Z" level=warning msg="cleaning up after shim disconnected" id=8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888 namespace=k8s.io
Jul 2 08:22:48.712270 containerd[1694]: time="2024-07-02T08:22:48.712131805Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:22:48.713964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888-rootfs.mount: Deactivated successfully.
Jul 2 08:22:49.147811 kubelet[3189]: I0702 08:22:49.147695 3189 scope.go:117] "RemoveContainer" containerID="8f6bf64e158208c505bb83c39fd098ca9bcc2a331ec07099468e47a8b7b26888"
Jul 2 08:22:49.150085 containerd[1694]: time="2024-07-02T08:22:49.150036145Z" level=info msg="CreateContainer within sandbox \"7666bc7b3998919ba31a0e72bb75596403843a59046f4b335ea057c086f9708a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 08:22:49.151428 kubelet[3189]: I0702 08:22:49.151145 3189 scope.go:117] "RemoveContainer" containerID="1a2674a5308f12192154077d73892a541522209b89927e7cbaf6eba1dc60e943"
Jul 2 08:22:49.153546 containerd[1694]: time="2024-07-02T08:22:49.153499709Z" level=info msg="CreateContainer within sandbox \"a4908e6b7fec8943d3abb6488313b2db6b523285e3a0a6e7a63aabcba6ac12f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 08:22:49.205410 containerd[1694]: time="2024-07-02T08:22:49.205352013Z" level=info msg="CreateContainer within sandbox \"7666bc7b3998919ba31a0e72bb75596403843a59046f4b335ea057c086f9708a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3d5e2b9f2a7e33336d59eb367a77a1f353fdb8475f3617dabd12ef4c4860450a\""
Jul 2 08:22:49.206107 containerd[1694]: time="2024-07-02T08:22:49.205834934Z" level=info msg="StartContainer for \"3d5e2b9f2a7e33336d59eb367a77a1f353fdb8475f3617dabd12ef4c4860450a\""
Jul 2 08:22:49.210121 containerd[1694]: time="2024-07-02T08:22:49.209909379Z" level=info msg="CreateContainer within sandbox \"a4908e6b7fec8943d3abb6488313b2db6b523285e3a0a6e7a63aabcba6ac12f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7ac463fec6d4d9d46649c052b62d9835c64230e2ef16eac813a4f7316489f375\""
Jul 2 08:22:49.211409 containerd[1694]: time="2024-07-02T08:22:49.210761220Z" level=info msg="StartContainer for \"7ac463fec6d4d9d46649c052b62d9835c64230e2ef16eac813a4f7316489f375\""
Jul 2 08:22:49.233842 systemd[1]: Started cri-containerd-3d5e2b9f2a7e33336d59eb367a77a1f353fdb8475f3617dabd12ef4c4860450a.scope - libcontainer container 3d5e2b9f2a7e33336d59eb367a77a1f353fdb8475f3617dabd12ef4c4860450a.
Jul 2 08:22:49.242679 systemd[1]: Started cri-containerd-7ac463fec6d4d9d46649c052b62d9835c64230e2ef16eac813a4f7316489f375.scope - libcontainer container 7ac463fec6d4d9d46649c052b62d9835c64230e2ef16eac813a4f7316489f375.
Jul 2 08:22:49.298839 containerd[1694]: time="2024-07-02T08:22:49.298793008Z" level=info msg="StartContainer for \"7ac463fec6d4d9d46649c052b62d9835c64230e2ef16eac813a4f7316489f375\" returns successfully"
Jul 2 08:22:49.301335 containerd[1694]: time="2024-07-02T08:22:49.298853008Z" level=info msg="StartContainer for \"3d5e2b9f2a7e33336d59eb367a77a1f353fdb8475f3617dabd12ef4c4860450a\" returns successfully"
Jul 2 08:22:50.732120 kubelet[3189]: E0702 08:22:50.731717 3189 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.44:45648->10.200.20.24:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3975.1.1-a-7c4c792b73.17de57b91f45de7a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3975.1.1-a-7c4c792b73,UID:ab4ece8e0b5322870d85ae9e25fa1d22,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-7c4c792b73,},FirstTimestamp:2024-07-02 08:22:40.307347066 +0000 UTC m=+196.797018877,LastTimestamp:2024-07-02 08:22:40.307347066 +0000 UTC m=+196.797018877,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-7c4c792b73,}"
Jul 2 08:22:57.579618 kubelet[3189]: I0702 08:22:57.579460 3189 status_manager.go:853] "Failed to get status for pod" podUID="ebf83237-ac66-49e7-8154-60ba510fcacf" pod="tigera-operator/tigera-operator-76ff79f7fd-lxmkj" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.44:45768->10.200.20.24:2379: read: connection timed out"
Jul 2 08:22:58.685499 kubelet[3189]: E0702 08:22:58.685365 3189 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7c4c792b73?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 08:22:58.734987 systemd[1]: cri-containerd-f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa.scope: Deactivated successfully.
Jul 2 08:22:58.754230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa-rootfs.mount: Deactivated successfully.
Jul 2 08:22:58.784083 containerd[1694]: time="2024-07-02T08:22:58.784012421Z" level=info msg="shim disconnected" id=f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa namespace=k8s.io
Jul 2 08:22:58.784083 containerd[1694]: time="2024-07-02T08:22:58.784078181Z" level=warning msg="cleaning up after shim disconnected" id=f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa namespace=k8s.io
Jul 2 08:22:58.784083 containerd[1694]: time="2024-07-02T08:22:58.784086821Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:22:59.175678 kubelet[3189]: I0702 08:22:59.175641 3189 scope.go:117] "RemoveContainer" containerID="a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916"
Jul 2 08:22:59.175978 kubelet[3189]: I0702 08:22:59.175949 3189 scope.go:117] "RemoveContainer" containerID="f887c79a2a72761cd5963f668948121d10bb206b6a2d393955d19d2d174ccefa"
Jul 2 08:22:59.176358 kubelet[3189]: E0702 08:22:59.176185 3189 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76ff79f7fd-lxmkj_tigera-operator(ebf83237-ac66-49e7-8154-60ba510fcacf)\"" pod="tigera-operator/tigera-operator-76ff79f7fd-lxmkj" podUID="ebf83237-ac66-49e7-8154-60ba510fcacf"
Jul 2 08:22:59.177992 containerd[1694]: time="2024-07-02T08:22:59.177824562Z" level=info msg="RemoveContainer for \"a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916\""
Jul 2 08:22:59.188227 containerd[1694]: time="2024-07-02T08:22:59.188160570Z" level=info msg="RemoveContainer for \"a0e6dd89595dcb1cf712245f20aaad4de540f6471851e77192a2e76420eb2916\" returns successfully"
Jul 2 08:23:08.256352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.274163 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.290419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.307332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.323019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.338386 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.338684 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.363692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.364034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.364150 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.380197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.388344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.388598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.404063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.412875 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.413122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.428743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.428996 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.452592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.452940 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.453064 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.469065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.477328 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.477565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.501454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.501824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.510071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.518724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.527190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.543733 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.544123 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.553067 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.553371 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.569702 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.570069 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.586059 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.586616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.602796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.603094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.627589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.628593 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.629116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.644104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.652988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.653120 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.668646 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.677264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.677667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.688288 kubelet[3189]: E0702 08:23:08.687902 3189 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-7c4c792b73?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 08:23:08.694373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.702595 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.702847 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.718857 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.719234 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.743285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.743700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.743838 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.759678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.760168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.775835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.776270 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.792032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.792535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.812283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.812588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.831920 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.832244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.851190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.851564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.869513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.869826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.888000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.907171 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.907627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.907767 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.926376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.926721 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.945855 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.955878 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.969730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.979606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.989466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:08.989703 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.006575 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.006847 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.021967 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.038734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.039105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.039334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.054935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:23:09.055391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001