Jul 2 01:49:15.037146 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 01:49:15.037165 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 01:49:15.037173 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 2 01:49:15.037180 kernel: printk: bootconsole [pl11] enabled
Jul 2 01:49:15.037185 kernel: efi: EFI v2.70 by EDK II
Jul 2 01:49:15.037191 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37b33f98
Jul 2 01:49:15.037197 kernel: random: crng init done
Jul 2 01:49:15.037203 kernel: ACPI: Early table checksum verification disabled
Jul 2 01:49:15.037208 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Jul 2 01:49:15.037213 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037219 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037225 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 01:49:15.037230 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037236 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037242 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037248 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037254 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037261 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037266 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 2 01:49:15.037272 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.037278 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 2 01:49:15.037283 kernel: NUMA: Failed to initialise from firmware
Jul 2 01:49:15.037289 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 01:49:15.037295 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Jul 2 01:49:15.037300 kernel: Zone ranges:
Jul 2 01:49:15.037306 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 2 01:49:15.037311 kernel: DMA32 empty
Jul 2 01:49:15.037318 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 01:49:15.037324 kernel: Movable zone start for each node
Jul 2 01:49:15.037330 kernel: Early memory node ranges
Jul 2 01:49:15.037335 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 2 01:49:15.037341 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Jul 2 01:49:15.037347 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Jul 2 01:49:15.037352 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Jul 2 01:49:15.037358 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Jul 2 01:49:15.037364 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Jul 2 01:49:15.037369 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Jul 2 01:49:15.037375 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Jul 2 01:49:15.037380 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 01:49:15.037387 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 01:49:15.037396 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 2 01:49:15.037402 kernel: psci: probing for conduit method from ACPI.
Jul 2 01:49:15.037408 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 01:49:15.037413 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 01:49:15.037420 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 2 01:49:15.037426 kernel: psci: SMC Calling Convention v1.4
Jul 2 01:49:15.037432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Jul 2 01:49:15.037438 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Jul 2 01:49:15.037444 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 01:49:15.037450 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 01:49:15.037456 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 01:49:15.037462 kernel: Detected PIPT I-cache on CPU0
Jul 2 01:49:15.037469 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 01:49:15.037475 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 01:49:15.037481 kernel: CPU features: detected: Spectre-BHB
Jul 2 01:49:15.037486 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 01:49:15.037494 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 01:49:15.037500 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 01:49:15.037506 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 2 01:49:15.037512 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 2 01:49:15.037518 kernel: Policy zone: Normal
Jul 2 01:49:15.037526 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 01:49:15.037532 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 01:49:15.037538 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 01:49:15.037544 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 01:49:15.037550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 01:49:15.037558 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Jul 2 01:49:15.037573 kernel: Memory: 3990264K/4194160K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 203896K reserved, 0K cma-reserved)
Jul 2 01:49:15.037579 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 01:49:15.037585 kernel: trace event string verifier disabled
Jul 2 01:49:15.037591 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 01:49:15.037598 kernel: rcu: RCU event tracing is enabled.
Jul 2 01:49:15.037604 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 01:49:15.037610 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 01:49:15.037616 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 01:49:15.037622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 01:49:15.037629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 01:49:15.037636 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 01:49:15.037642 kernel: GICv3: 960 SPIs implemented
Jul 2 01:49:15.037648 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 01:49:15.037654 kernel: GICv3: Distributor has no Range Selector support
Jul 2 01:49:15.037660 kernel: Root IRQ handler: gic_handle_irq
Jul 2 01:49:15.037666 kernel: GICv3: 16 PPIs implemented
Jul 2 01:49:15.037672 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 2 01:49:15.037678 kernel: ITS: No ITS available, not enabling LPIs
Jul 2 01:49:15.037684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 01:49:15.037690 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 01:49:15.037696 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 01:49:15.037702 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 01:49:15.037709 kernel: Console: colour dummy device 80x25
Jul 2 01:49:15.037716 kernel: printk: console [tty1] enabled
Jul 2 01:49:15.037722 kernel: ACPI: Core revision 20210730
Jul 2 01:49:15.037729 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 01:49:15.037735 kernel: pid_max: default: 32768 minimum: 301
Jul 2 01:49:15.037741 kernel: LSM: Security Framework initializing
Jul 2 01:49:15.037747 kernel: SELinux: Initializing.
Jul 2 01:49:15.037753 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.037760 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.037767 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 2 01:49:15.037773 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Jul 2 01:49:15.037779 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 01:49:15.037785 kernel: Remapping and enabling EFI services.
Jul 2 01:49:15.037791 kernel: smp: Bringing up secondary CPUs ...
Jul 2 01:49:15.037797 kernel: Detected PIPT I-cache on CPU1
Jul 2 01:49:15.037804 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 2 01:49:15.037810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 01:49:15.037816 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 01:49:15.037823 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 01:49:15.037829 kernel: SMP: Total of 2 processors activated.
Jul 2 01:49:15.037836 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 01:49:15.037842 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 2 01:49:15.037849 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 01:49:15.037855 kernel: CPU features: detected: CRC32 instructions
Jul 2 01:49:15.037861 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 01:49:15.037867 kernel: CPU features: detected: LSE atomic instructions
Jul 2 01:49:15.037874 kernel: CPU features: detected: Privileged Access Never
Jul 2 01:49:15.037881 kernel: CPU: All CPU(s) started at EL1
Jul 2 01:49:15.037888 kernel: alternatives: patching kernel code
Jul 2 01:49:15.037898 kernel: devtmpfs: initialized
Jul 2 01:49:15.037906 kernel: KASLR enabled
Jul 2 01:49:15.037912 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 01:49:15.037919 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 01:49:15.037925 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 01:49:15.037932 kernel: SMBIOS 3.1.0 present.
Jul 2 01:49:15.037938 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Jul 2 01:49:15.037945 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 01:49:15.037953 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 01:49:15.037960 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 01:49:15.037966 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 01:49:15.037973 kernel: audit: initializing netlink subsys (disabled)
Jul 2 01:49:15.037979 kernel: audit: type=2000 audit(0.090:1): state=initialized audit_enabled=0 res=1
Jul 2 01:49:15.037986 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 01:49:15.037992 kernel: cpuidle: using governor menu
Jul 2 01:49:15.038000 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 01:49:15.038006 kernel: ASID allocator initialised with 32768 entries
Jul 2 01:49:15.038013 kernel: ACPI: bus type PCI registered
Jul 2 01:49:15.038019 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 01:49:15.038026 kernel: Serial: AMBA PL011 UART driver
Jul 2 01:49:15.038033 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 01:49:15.038039 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 01:49:15.038046 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 01:49:15.038052 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 01:49:15.038060 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 01:49:15.038066 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 01:49:15.038073 kernel: ACPI: Added _OSI(Module Device)
Jul 2 01:49:15.038079 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 01:49:15.038086 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 01:49:15.038092 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 01:49:15.038099 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 01:49:15.038105 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 01:49:15.038112 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 01:49:15.038120 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 01:49:15.038126 kernel: ACPI: Interpreter enabled
Jul 2 01:49:15.038132 kernel: ACPI: Using GIC for interrupt routing
Jul 2 01:49:15.038139 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 01:49:15.038146 kernel: printk: console [ttyAMA0] enabled
Jul 2 01:49:15.038152 kernel: printk: bootconsole [pl11] disabled
Jul 2 01:49:15.038159 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 2 01:49:15.038166 kernel: iommu: Default domain type: Translated
Jul 2 01:49:15.038172 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 01:49:15.038180 kernel: vgaarb: loaded
Jul 2 01:49:15.038186 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 01:49:15.038193 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 01:49:15.038199 kernel: PTP clock support registered
Jul 2 01:49:15.038206 kernel: Registered efivars operations
Jul 2 01:49:15.038212 kernel: No ACPI PMU IRQ for CPU0
Jul 2 01:49:15.038219 kernel: No ACPI PMU IRQ for CPU1
Jul 2 01:49:15.038225 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 01:49:15.038232 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 01:49:15.038240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 01:49:15.038246 kernel: pnp: PnP ACPI init
Jul 2 01:49:15.038252 kernel: pnp: PnP ACPI: found 0 devices
Jul 2 01:49:15.038259 kernel: NET: Registered PF_INET protocol family
Jul 2 01:49:15.038266 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 01:49:15.038272 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 01:49:15.038279 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 01:49:15.038286 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 01:49:15.038292 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 01:49:15.038300 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 01:49:15.038307 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.038314 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.038320 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 01:49:15.038327 kernel: PCI: CLS 0 bytes, default 64
Jul 2 01:49:15.038333 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 2 01:49:15.038340 kernel: kvm [1]: HYP mode not available
Jul 2 01:49:15.038346 kernel: Initialise system trusted keyrings
Jul 2 01:49:15.038352 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 01:49:15.038360 kernel: Key type asymmetric registered
Jul 2 01:49:15.038367 kernel: Asymmetric key parser 'x509' registered
Jul 2 01:49:15.038373 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 01:49:15.038380 kernel: io scheduler mq-deadline registered
Jul 2 01:49:15.038386 kernel: io scheduler kyber registered
Jul 2 01:49:15.038393 kernel: io scheduler bfq registered
Jul 2 01:49:15.038400 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 01:49:15.038406 kernel: thunder_xcv, ver 1.0
Jul 2 01:49:15.038412 kernel: thunder_bgx, ver 1.0
Jul 2 01:49:15.038420 kernel: nicpf, ver 1.0
Jul 2 01:49:15.038426 kernel: nicvf, ver 1.0
Jul 2 01:49:15.038537 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 01:49:15.041745 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T01:49:14 UTC (1719884954)
Jul 2 01:49:15.041777 kernel: efifb: probing for efifb
Jul 2 01:49:15.041785 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 01:49:15.041792 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 01:49:15.041799 kernel: efifb: scrolling: redraw
Jul 2 01:49:15.041811 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 01:49:15.041818 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 01:49:15.041825 kernel: fb0: EFI VGA frame buffer device
Jul 2 01:49:15.041832 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 2 01:49:15.041838 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 01:49:15.041845 kernel: NET: Registered PF_INET6 protocol family
Jul 2 01:49:15.041852 kernel: Segment Routing with IPv6
Jul 2 01:49:15.041858 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 01:49:15.041865 kernel: NET: Registered PF_PACKET protocol family
Jul 2 01:49:15.041874 kernel: Key type dns_resolver registered
Jul 2 01:49:15.041881 kernel: registered taskstats version 1
Jul 2 01:49:15.041887 kernel: Loading compiled-in X.509 certificates
Jul 2 01:49:15.041894 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 01:49:15.041901 kernel: Key type .fscrypt registered
Jul 2 01:49:15.041907 kernel: Key type fscrypt-provisioning registered
Jul 2 01:49:15.041914 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 01:49:15.041921 kernel: ima: Allocated hash algorithm: sha1
Jul 2 01:49:15.041927 kernel: ima: No architecture policies found
Jul 2 01:49:15.041935 kernel: clk: Disabling unused clocks
Jul 2 01:49:15.041942 kernel: Freeing unused kernel memory: 36352K
Jul 2 01:49:15.041949 kernel: Run /init as init process
Jul 2 01:49:15.041956 kernel: with arguments:
Jul 2 01:49:15.041963 kernel: /init
Jul 2 01:49:15.041969 kernel: with environment:
Jul 2 01:49:15.041976 kernel: HOME=/
Jul 2 01:49:15.041982 kernel: TERM=linux
Jul 2 01:49:15.041989 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 01:49:15.041999 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 01:49:15.042008 systemd[1]: Detected virtualization microsoft.
Jul 2 01:49:15.042016 systemd[1]: Detected architecture arm64.
Jul 2 01:49:15.042023 systemd[1]: Running in initrd.
Jul 2 01:49:15.042030 systemd[1]: No hostname configured, using default hostname.
Jul 2 01:49:15.042037 systemd[1]: Hostname set to .
Jul 2 01:49:15.042045 systemd[1]: Initializing machine ID from random generator.
Jul 2 01:49:15.042053 systemd[1]: Queued start job for default target initrd.target.
Jul 2 01:49:15.042061 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 01:49:15.042067 systemd[1]: Reached target cryptsetup.target.
Jul 2 01:49:15.042075 systemd[1]: Reached target paths.target.
Jul 2 01:49:15.042081 systemd[1]: Reached target slices.target.
Jul 2 01:49:15.042089 systemd[1]: Reached target swap.target.
Jul 2 01:49:15.042096 systemd[1]: Reached target timers.target.
Jul 2 01:49:15.042104 systemd[1]: Listening on iscsid.socket.
Jul 2 01:49:15.042112 systemd[1]: Listening on iscsiuio.socket.
Jul 2 01:49:15.042119 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 01:49:15.042127 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 01:49:15.042135 systemd[1]: Listening on systemd-journald.socket.
Jul 2 01:49:15.042142 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 01:49:15.042149 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 01:49:15.042156 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 01:49:15.042163 systemd[1]: Reached target sockets.target.
Jul 2 01:49:15.042170 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 01:49:15.042179 systemd[1]: Finished network-cleanup.service.
Jul 2 01:49:15.042186 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 01:49:15.042194 systemd[1]: Starting systemd-journald.service...
Jul 2 01:49:15.042201 systemd[1]: Starting systemd-modules-load.service...
Jul 2 01:49:15.042208 systemd[1]: Starting systemd-resolved.service...
Jul 2 01:49:15.042215 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 01:49:15.042228 systemd-journald[276]: Journal started
Jul 2 01:49:15.042275 systemd-journald[276]: Runtime Journal (/run/log/journal/e0c89e9c89f04daa91b584f1a427f214) is 8.0M, max 78.6M, 70.6M free.
Jul 2 01:49:15.032405 systemd-modules-load[277]: Inserted module 'overlay'
Jul 2 01:49:15.076577 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 01:49:15.076635 systemd[1]: Started systemd-journald.service.
Jul 2 01:49:15.082969 systemd-resolved[278]: Positive Trust Anchors:
Jul 2 01:49:15.088538 kernel: Bridge firewalling registered
Jul 2 01:49:15.082985 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 01:49:15.126291 kernel: audit: type=1130 audit(1719884955.100:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.083012 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 01:49:15.200681 kernel: audit: type=1130 audit(1719884955.130:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.200705 kernel: SCSI subsystem initialized
Jul 2 01:49:15.200714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 01:49:15.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.085186 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 2 01:49:15.235743 kernel: audit: type=1130 audit(1719884955.204:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.235765 kernel: device-mapper: uevent: version 1.0.3
Jul 2 01:49:15.235773 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 01:49:15.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.094375 systemd-modules-load[277]: Inserted module 'br_netfilter'
Jul 2 01:49:15.262017 kernel: audit: type=1130 audit(1719884955.239:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.126305 systemd[1]: Started systemd-resolved.service.
Jul 2 01:49:15.131231 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 01:49:15.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.224453 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 01:49:15.314276 kernel: audit: type=1130 audit(1719884955.266:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.240231 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 01:49:15.267192 systemd[1]: Reached target nss-lookup.target.
Jul 2 01:49:15.369997 kernel: audit: type=1130 audit(1719884955.324:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.370025 kernel: audit: type=1130 audit(1719884955.347:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.279765 systemd-modules-load[277]: Inserted module 'dm_multipath'
Jul 2 01:49:15.295865 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 01:49:15.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.303048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 01:49:15.447056 kernel: audit: type=1130 audit(1719884955.391:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.447082 kernel: audit: type=1130 audit(1719884955.424:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.313539 systemd[1]: Finished systemd-modules-load.service.
Jul 2 01:49:15.325421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 01:49:15.349376 systemd[1]: Starting systemd-sysctl.service...
Jul 2 01:49:15.385821 systemd[1]: Finished systemd-sysctl.service.
Jul 2 01:49:15.391721 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 01:49:15.453455 systemd[1]: Starting dracut-cmdline.service...
Jul 2 01:49:15.483007 dracut-cmdline[299]: dracut-dracut-053
Jul 2 01:49:15.483007 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 01:49:15.583593 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 01:49:15.599598 kernel: iscsi: registered transport (tcp)
Jul 2 01:49:15.620610 kernel: iscsi: registered transport (qla4xxx)
Jul 2 01:49:15.620667 kernel: QLogic iSCSI HBA Driver
Jul 2 01:49:15.655863 systemd[1]: Finished dracut-cmdline.service.
Jul 2 01:49:15.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:15.661364 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 01:49:15.713582 kernel: raid6: neonx8 gen() 13829 MB/s
Jul 2 01:49:15.733572 kernel: raid6: neonx8 xor() 10840 MB/s
Jul 2 01:49:15.753573 kernel: raid6: neonx4 gen() 13549 MB/s
Jul 2 01:49:15.774575 kernel: raid6: neonx4 xor() 11307 MB/s
Jul 2 01:49:15.794570 kernel: raid6: neonx2 gen() 12965 MB/s
Jul 2 01:49:15.814573 kernel: raid6: neonx2 xor() 10371 MB/s
Jul 2 01:49:15.835571 kernel: raid6: neonx1 gen() 10485 MB/s
Jul 2 01:49:15.855569 kernel: raid6: neonx1 xor() 8791 MB/s
Jul 2 01:49:15.875569 kernel: raid6: int64x8 gen() 6275 MB/s
Jul 2 01:49:15.896571 kernel: raid6: int64x8 xor() 3541 MB/s
Jul 2 01:49:15.916570 kernel: raid6: int64x4 gen() 7233 MB/s
Jul 2 01:49:15.936572 kernel: raid6: int64x4 xor() 3860 MB/s
Jul 2 01:49:15.957569 kernel: raid6: int64x2 gen() 6155 MB/s
Jul 2 01:49:15.977573 kernel: raid6: int64x2 xor() 3324 MB/s
Jul 2 01:49:15.997570 kernel: raid6: int64x1 gen() 5046 MB/s
Jul 2 01:49:16.022965 kernel: raid6: int64x1 xor() 2645 MB/s
Jul 2 01:49:16.022985 kernel: raid6: using algorithm neonx8 gen() 13829 MB/s
Jul 2 01:49:16.023001 kernel: raid6: .... xor() 10840 MB/s, rmw enabled
Jul 2 01:49:16.027332 kernel: raid6: using neon recovery algorithm
Jul 2 01:49:16.044572 kernel: xor: measuring software checksum speed
Jul 2 01:49:16.048569 kernel: 8regs : 17322 MB/sec
Jul 2 01:49:16.056823 kernel: 32regs : 20760 MB/sec
Jul 2 01:49:16.056834 kernel: arm64_neon : 27911 MB/sec
Jul 2 01:49:16.056844 kernel: xor: using function: arm64_neon (27911 MB/sec)
Jul 2 01:49:16.117575 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 01:49:16.127510 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 01:49:16.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.136000 audit: BPF prog-id=7 op=LOAD
Jul 2 01:49:16.136000 audit: BPF prog-id=8 op=LOAD
Jul 2 01:49:16.136998 systemd[1]: Starting systemd-udevd.service...
Jul 2 01:49:16.154805 systemd-udevd[476]: Using default interface naming scheme 'v252'.
Jul 2 01:49:16.161153 systemd[1]: Started systemd-udevd.service.
Jul 2 01:49:16.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.171621 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 01:49:16.188373 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Jul 2 01:49:16.218202 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 01:49:16.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.223962 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 01:49:16.263439 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 01:49:16.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:16.316582 kernel: hv_vmbus: Vmbus version:5.3 Jul 2 01:49:16.328045 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 01:49:16.328097 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 01:49:16.328108 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 01:49:16.356938 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 01:49:16.357007 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 01:49:16.362184 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 01:49:16.370657 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 01:49:16.378481 kernel: scsi host1: storvsc_host_t Jul 2 01:49:16.378575 kernel: scsi host0: storvsc_host_t Jul 2 01:49:16.378710 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 01:49:16.391734 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 01:49:16.409150 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 01:49:16.409370 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 01:49:16.410586 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 01:49:16.427483 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 01:49:16.427725 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 01:49:16.432580 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 01:49:16.432785 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 01:49:16.439585 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 01:49:16.454031 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 
01:49:16.454077 kernel: hv_netvsc 0022487c-daa9-0022-487c-daa90022487c eth0: VF slot 1 added Jul 2 01:49:16.454201 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 01:49:16.471586 kernel: hv_vmbus: registering driver hv_pci Jul 2 01:49:16.482885 kernel: hv_pci 7f9d7b6f-2e31-4422-86f3-3cbcec1acb9e: PCI VMBus probing: Using version 0x10004 Jul 2 01:49:16.483102 kernel: hv_pci 7f9d7b6f-2e31-4422-86f3-3cbcec1acb9e: PCI host bridge to bus 2e31:00 Jul 2 01:49:16.495555 kernel: pci_bus 2e31:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 2 01:49:16.495769 kernel: pci_bus 2e31:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 01:49:16.508614 kernel: pci 2e31:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 2 01:49:16.521482 kernel: pci 2e31:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 01:49:16.544128 kernel: pci 2e31:00:02.0: enabling Extended Tags Jul 2 01:49:16.561627 kernel: pci 2e31:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2e31:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 2 01:49:16.574170 kernel: pci_bus 2e31:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 01:49:16.574380 kernel: pci 2e31:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 01:49:16.614581 kernel: mlx5_core 2e31:00:02.0: firmware version: 16.30.1284 Jul 2 01:49:16.770583 kernel: mlx5_core 2e31:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jul 2 01:49:16.830719 kernel: hv_netvsc 0022487c-daa9-0022-487c-daa90022487c eth0: VF registering: eth1 Jul 2 01:49:16.830907 kernel: mlx5_core 2e31:00:02.0 eth1: joined to eth0 Jul 2 01:49:16.842584 kernel: mlx5_core 2e31:00:02.0 enP11825s1: renamed from eth1 Jul 2 01:49:16.895919 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Jul 2 01:49:16.952582 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (534) Jul 2 01:49:16.966634 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 01:49:17.097378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 01:49:17.113081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 01:49:17.125202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 01:49:17.133457 systemd[1]: Starting disk-uuid.service... Jul 2 01:49:17.159592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 01:49:17.167588 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 01:49:18.176580 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 01:49:18.177414 disk-uuid[602]: The operation has completed successfully. Jul 2 01:49:18.260066 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 01:49:18.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.260160 systemd[1]: Finished disk-uuid.service. Jul 2 01:49:18.265375 systemd[1]: Starting verity-setup.service... Jul 2 01:49:18.310602 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 01:49:18.484834 systemd[1]: Found device dev-mapper-usr.device. Jul 2 01:49:18.490487 systemd[1]: Mounting sysusr-usr.mount... Jul 2 01:49:18.500030 systemd[1]: Finished verity-setup.service. Jul 2 01:49:18.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:18.554587 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 01:49:18.554983 systemd[1]: Mounted sysusr-usr.mount. Jul 2 01:49:18.558984 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 01:49:18.559844 systemd[1]: Starting ignition-setup.service... Jul 2 01:49:18.567041 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 01:49:18.607015 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 01:49:18.607079 kernel: BTRFS info (device sda6): using free space tree Jul 2 01:49:18.613410 kernel: BTRFS info (device sda6): has skinny extents Jul 2 01:49:18.637857 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 01:49:18.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.647000 audit: BPF prog-id=9 op=LOAD Jul 2 01:49:18.648505 systemd[1]: Starting systemd-networkd.service... Jul 2 01:49:18.673087 systemd-networkd[843]: lo: Link UP Jul 2 01:49:18.674600 systemd-networkd[843]: lo: Gained carrier Jul 2 01:49:18.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.675109 systemd-networkd[843]: Enumeration completed Jul 2 01:49:18.676736 systemd[1]: Started systemd-networkd.service. Jul 2 01:49:18.681762 systemd-networkd[843]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 01:49:18.682675 systemd[1]: Reached target network.target. Jul 2 01:49:18.695052 systemd[1]: Starting iscsiuio.service... 
Jul 2 01:49:18.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.714880 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 01:49:18.731456 iscsid[855]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 01:49:18.731456 iscsid[855]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 01:49:18.731456 iscsid[855]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 01:49:18.731456 iscsid[855]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 01:49:18.731456 iscsid[855]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 01:49:18.731456 iscsid[855]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 01:49:18.731456 iscsid[855]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 01:49:18.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.715263 systemd[1]: Started iscsiuio.service. Jul 2 01:49:18.725945 systemd[1]: Starting iscsid.service... Jul 2 01:49:18.742460 systemd[1]: Started iscsid.service. Jul 2 01:49:18.747593 systemd[1]: Starting dracut-initqueue.service... 
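iscsid complains above because /etc/iscsi/initiatorname.iscsi is absent in the initramfs, which is harmless here since no software iSCSI targets are used. On systems that do need it, the file is a single InitiatorName= line in IQN format; a sketch of generating one (the domain and identifier are the example values from iscsid's own message, not anything taken from this machine):

```python
import datetime

def make_initiator_name(reversed_domain: str, identifier: str) -> str:
    """Build an iSCSI Qualified Name: iqn.yyyy-mm.<reversed domain>[:identifier]."""
    date = datetime.date(2001, 4, 1)  # naming-authority date part of the IQN
    iqn = "iqn.{}-{:02d}.{}".format(date.year, date.month, reversed_domain)
    if identifier:
        iqn += ":" + identifier
    return "InitiatorName={}\n".format(iqn)

# Mirrors the example iscsid prints: iqn.2001-04.com.redhat:fc6
print(make_initiator_name("com.redhat", "fc6"), end="")
```

Writing that string to /etc/iscsi/initiatorname.iscsi is all the warning is asking for.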
Jul 2 01:49:18.783501 systemd[1]: Finished dracut-initqueue.service. Jul 2 01:49:18.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.788325 systemd[1]: Reached target remote-fs-pre.target. Jul 2 01:49:18.798116 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 01:49:18.809569 systemd[1]: Reached target remote-fs.target. Jul 2 01:49:18.828750 systemd[1]: Starting dracut-pre-mount.service... Jul 2 01:49:18.846921 systemd[1]: Finished dracut-pre-mount.service. Jul 2 01:49:18.886379 kernel: mlx5_core 2e31:00:02.0 enP11825s1: Link up Jul 2 01:49:18.889340 systemd[1]: Finished ignition-setup.service. Jul 2 01:49:18.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:18.895023 systemd[1]: Starting ignition-fetch-offline.service... 
Jul 2 01:49:18.932555 kernel: hv_netvsc 0022487c-daa9-0022-487c-daa90022487c eth0: Data path switched to VF: enP11825s1 Jul 2 01:49:18.932884 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 01:49:18.932808 systemd-networkd[843]: enP11825s1: Link UP Jul 2 01:49:18.932915 systemd-networkd[843]: eth0: Link UP Jul 2 01:49:18.933042 systemd-networkd[843]: eth0: Gained carrier Jul 2 01:49:18.940829 systemd-networkd[843]: enP11825s1: Gained carrier Jul 2 01:49:18.963634 systemd-networkd[843]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 01:49:20.493698 systemd-networkd[843]: eth0: Gained IPv6LL Jul 2 01:49:21.398840 ignition[870]: Ignition 2.14.0 Jul 2 01:49:21.398853 ignition[870]: Stage: fetch-offline Jul 2 01:49:21.398914 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:21.398940 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:21.439912 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:21.440061 ignition[870]: parsed url from cmdline: "" Jul 2 01:49:21.440065 ignition[870]: no config URL provided Jul 2 01:49:21.440073 ignition[870]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 01:49:21.483750 kernel: kauditd_printk_skb: 18 callbacks suppressed Jul 2 01:49:21.483781 kernel: audit: type=1130 audit(1719884961.455:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:21.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:21.447130 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 01:49:21.440081 ignition[870]: no config at "/usr/lib/ignition/user.ign" Jul 2 01:49:21.457426 systemd[1]: Starting ignition-fetch.service... Jul 2 01:49:21.440086 ignition[870]: failed to fetch config: resource requires networking Jul 2 01:49:21.440418 ignition[870]: Ignition finished successfully Jul 2 01:49:21.487697 ignition[876]: Ignition 2.14.0 Jul 2 01:49:21.487703 ignition[876]: Stage: fetch Jul 2 01:49:21.487816 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:21.487834 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:21.491612 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:21.491787 ignition[876]: parsed url from cmdline: "" Jul 2 01:49:21.491791 ignition[876]: no config URL provided Jul 2 01:49:21.491796 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 01:49:21.491805 ignition[876]: no config at "/usr/lib/ignition/user.ign" Jul 2 01:49:21.491840 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 01:49:21.592521 ignition[876]: GET result: OK Jul 2 01:49:21.592637 ignition[876]: config has been read from IMDS userdata Jul 2 01:49:21.592678 ignition[876]: parsing config with SHA512: abb2a6dafacf250fa903c48594514ca48a61636f90b352d4c390a6a5abfbb6e1e952bdb41ebcd0977d073fdeca42034f07da85f48955d14dc194bbdbefeac76d Jul 2 01:49:21.596325 unknown[876]: fetched base config from "system" Jul 2 01:49:21.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:21.596937 ignition[876]: fetch: fetch complete Jul 2 01:49:21.596336 unknown[876]: fetched base config from "system" Jul 2 01:49:21.637072 kernel: audit: type=1130 audit(1719884961.604:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:21.596942 ignition[876]: fetch: fetch passed Jul 2 01:49:21.596342 unknown[876]: fetched user config from "azure" Jul 2 01:49:21.596988 ignition[876]: Ignition finished successfully Jul 2 01:49:21.598142 systemd[1]: Finished ignition-fetch.service. Jul 2 01:49:21.647170 ignition[883]: Ignition 2.14.0 Jul 2 01:49:21.605557 systemd[1]: Starting ignition-kargs.service... Jul 2 01:49:21.647177 ignition[883]: Stage: kargs Jul 2 01:49:21.669279 systemd[1]: Finished ignition-kargs.service. Jul 2 01:49:21.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:21.647297 ignition[883]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:21.708683 kernel: audit: type=1130 audit(1719884961.676:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:21.677651 systemd[1]: Starting ignition-disks.service... Jul 2 01:49:21.741204 kernel: audit: type=1130 audit(1719884961.712:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:21.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:21.647322 ignition[883]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:21.708698 systemd[1]: Finished ignition-disks.service. Jul 2 01:49:21.663275 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:21.713184 systemd[1]: Reached target initrd-root-device.target. Jul 2 01:49:21.664485 ignition[883]: kargs: kargs passed Jul 2 01:49:21.736892 systemd[1]: Reached target local-fs-pre.target. Jul 2 01:49:21.664549 ignition[883]: Ignition finished successfully Jul 2 01:49:21.742647 systemd[1]: Reached target local-fs.target. Jul 2 01:49:21.687605 ignition[889]: Ignition 2.14.0 Jul 2 01:49:21.749780 systemd[1]: Reached target sysinit.target. Jul 2 01:49:21.687612 ignition[889]: Stage: disks Jul 2 01:49:21.759465 systemd[1]: Reached target basic.target. Jul 2 01:49:21.687724 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:21.771134 systemd[1]: Starting systemd-fsck-root.service... Jul 2 01:49:21.687742 ignition[889]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:21.690478 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:21.702593 ignition[889]: disks: disks passed Jul 2 01:49:21.702665 ignition[889]: Ignition finished successfully Jul 2 01:49:21.870841 systemd-fsck[897]: ROOT: clean, 614/7326000 files, 481075/7359488 blocks Jul 2 01:49:21.879008 systemd[1]: Finished systemd-fsck-root.service. Jul 2 01:49:21.905715 kernel: audit: type=1130 audit(1719884961.883:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:21.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:21.884504 systemd[1]: Mounting sysroot.mount... Jul 2 01:49:21.931588 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 01:49:21.931052 systemd[1]: Mounted sysroot.mount. Jul 2 01:49:21.935058 systemd[1]: Reached target initrd-root-fs.target. Jul 2 01:49:21.998836 systemd[1]: Mounting sysroot-usr.mount... Jul 2 01:49:22.003788 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 2 01:49:22.011136 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 01:49:22.011174 systemd[1]: Reached target ignition-diskful.target. Jul 2 01:49:22.022823 systemd[1]: Mounted sysroot-usr.mount. Jul 2 01:49:22.082144 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 01:49:22.087489 systemd[1]: Starting initrd-setup-root.service... Jul 2 01:49:22.112600 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907) Jul 2 01:49:22.119650 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 01:49:22.136135 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 01:49:22.136790 kernel: BTRFS info (device sda6): using free space tree Jul 2 01:49:22.136807 kernel: BTRFS info (device sda6): has skinny extents Jul 2 01:49:22.139496 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 01:49:22.153000 initrd-setup-root[938]: cut: /sysroot/etc/group: No such file or directory Jul 2 01:49:22.176982 initrd-setup-root[946]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 01:49:22.186588 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 01:49:22.703411 systemd[1]: Finished initrd-setup-root.service. Jul 2 01:49:22.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.728431 systemd[1]: Starting ignition-mount.service... Jul 2 01:49:22.738812 kernel: audit: type=1130 audit(1719884962.707:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.739077 systemd[1]: Starting sysroot-boot.service... Jul 2 01:49:22.744012 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 01:49:22.744115 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
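The initrd-setup-root warnings above come from `cut` being run against account databases (/sysroot/etc/passwd, group, shadow, gshadow) that don't exist yet on a freshly provisioned root. The extraction being attempted is plain colon-field splitting; an equivalent sketch that tolerates the missing file the same way:

```python
from pathlib import Path

def usernames(passwd_path: str) -> list:
    """First colon-separated field of each passwd line, like `cut -d: -f1`."""
    path = Path(passwd_path)
    if not path.exists():  # mirrors "cut: ...: No such file or directory"
        return []
    return [line.split(":", 1)[0] for line in path.read_text().splitlines() if line]

print(usernames("/nonexistent/etc/passwd"))  # []
```

Once the real passwd is in place, the same call yields the account names the setup step seeds into the new root.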
Jul 2 01:49:22.777588 ignition[974]: INFO : Ignition 2.14.0 Jul 2 01:49:22.777588 ignition[974]: INFO : Stage: mount Jul 2 01:49:22.793428 ignition[974]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:22.793428 ignition[974]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:22.793428 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:22.793428 ignition[974]: INFO : mount: mount passed Jul 2 01:49:22.793428 ignition[974]: INFO : Ignition finished successfully Jul 2 01:49:22.869180 kernel: audit: type=1130 audit(1719884962.793:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.869205 kernel: audit: type=1130 audit(1719884962.824:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.789025 systemd[1]: Finished sysroot-boot.service. Jul 2 01:49:22.818344 systemd[1]: Finished ignition-mount.service. 
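Each Ignition stage above (fetch-offline, fetch, kargs, disks, mount) re-reads /usr/lib/ignition/base.d/base.ign and logs the same SHA512 (4824fd4a…) before acting. The check itself is just a digest over the raw config bytes; a minimal sketch of the mechanics on a stand-in payload (the real digest would of course require the real base.ign contents):

```python
import hashlib

def config_digest(raw: bytes) -> str:
    """SHA512 hex digest of a raw Ignition config, as logged at each stage."""
    return hashlib.sha512(raw).hexdigest()

# Stand-in payload; a SHA512 hex digest is always 128 characters.
print(config_digest(b"sample config bytes")[:16], "...")
```

Logging the digest at every stage makes it easy to confirm that all stages acted on the same config revision.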
Jul 2 01:49:23.354355 coreos-metadata[906]: Jul 02 01:49:23.354 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 01:49:23.364576 coreos-metadata[906]: Jul 02 01:49:23.364 INFO Fetch successful Jul 2 01:49:23.399796 coreos-metadata[906]: Jul 02 01:49:23.399 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 01:49:23.413943 coreos-metadata[906]: Jul 02 01:49:23.413 INFO Fetch successful Jul 2 01:49:23.430544 coreos-metadata[906]: Jul 02 01:49:23.430 INFO wrote hostname ci-3510.3.5-a-637f296955 to /sysroot/etc/hostname Jul 2 01:49:23.439871 systemd[1]: Finished flatcar-metadata-hostname.service. Jul 2 01:49:23.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.445820 systemd[1]: Starting ignition-files.service... Jul 2 01:49:23.474792 kernel: audit: type=1130 audit(1719884963.444:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.473859 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 01:49:23.499257 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (985) Jul 2 01:49:23.499293 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 01:49:23.499304 kernel: BTRFS info (device sda6): using free space tree Jul 2 01:49:23.508705 kernel: BTRFS info (device sda6): has skinny extents Jul 2 01:49:23.513504 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
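flatcar-metadata-hostname above pulls the VM name from the Azure Instance Metadata Service (169.254.169.254) and writes it to /sysroot/etc/hostname. A sketch of that request, assuming the standard IMDS convention of a required `Metadata: true` header; the network call only succeeds from inside an Azure VM, so it is kept behind a main guard:

```python
import urllib.request

IMDS_NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

def build_imds_request(url: str) -> urllib.request.Request:
    """IMDS requires the Metadata header and must not be reached via a proxy."""
    return urllib.request.Request(url, headers={"Metadata": "true"})

def fetch_hostname() -> str:
    with urllib.request.urlopen(build_imds_request(IMDS_NAME_URL), timeout=5) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":  # only meaningful inside an Azure VM
    print(fetch_hostname())
```

The api-version and endpoint path match the URL logged by coreos-metadata above; the earlier fetch to 168.63.129.16 is the separate Azure wireserver versions probe.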
Jul 2 01:49:23.527254 ignition[1004]: INFO : Ignition 2.14.0 Jul 2 01:49:23.527254 ignition[1004]: INFO : Stage: files Jul 2 01:49:23.537473 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:23.537473 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:23.537473 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:23.537473 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping Jul 2 01:49:23.569993 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 01:49:23.569993 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 01:49:23.614317 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 01:49:23.621936 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 01:49:23.629228 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 01:49:23.629228 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 01:49:23.629228 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 01:49:23.629228 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 01:49:23.629228 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 01:49:23.622004 unknown[1004]: wrote ssh authorized keys file for user: core Jul 2 01:49:23.998712 ignition[1004]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 01:49:24.220122 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 01:49:24.230988 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 01:49:24.230988 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 2 01:49:24.677645 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 2 01:49:24.750984 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 01:49:24.761765 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 2 01:49:24.916619 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1006) Jul 2 01:49:24.808870 systemd[1]: mnt-oem100843594.mount: Deactivated successfully. 
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem100843594"
Jul 2 01:49:24.922682 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem100843594": device or resource busy
Jul 2 01:49:24.922682 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem100843594", trying btrfs: device or resource busy
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem100843594"
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem100843594"
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem100843594"
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem100843594"
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 01:49:24.922682 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3069783709"
Jul 2 01:49:24.922682 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3069783709": device or resource busy
Jul 2 01:49:24.922682 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3069783709", trying btrfs: device or resource busy
Jul 2 01:49:24.833875 systemd[1]: mnt-oem3069783709.mount: Deactivated successfully.
Jul 2 01:49:25.087835 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3069783709"
Jul 2 01:49:25.087835 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3069783709"
Jul 2 01:49:25.087835 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3069783709"
Jul 2 01:49:25.087835 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3069783709"
Jul 2 01:49:25.087835 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 01:49:25.087835 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 01:49:25.087835 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 01:49:25.207054 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET result: OK
Jul 2 01:49:25.410551 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 01:49:25.410551 ignition[1004]: INFO : files: op(15): [started] processing unit "nvidia.service"
Jul 2 01:49:25.410551 ignition[1004]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Jul 2 01:49:25.410551 ignition[1004]: INFO : files: op(16): [started] processing unit "waagent.service"
Jul 2 01:49:25.410551 ignition[1004]: INFO : files: op(16): [finished] processing unit "waagent.service"
Jul 2 01:49:25.410551 ignition[1004]: INFO : files: op(17): [started] processing unit "containerd.service"
Jul 2 01:49:25.487995 kernel: audit: type=1130 audit(1719884965.433:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(17): [finished] processing unit "containerd.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(19): [started] processing unit "prepare-helm.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(19): [finished] processing unit "prepare-helm.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(1d): [started] setting preset to enabled for "nvidia.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: op(1d): [finished] setting preset to enabled for "nvidia.service"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 01:49:25.488093 ignition[1004]: INFO : files: files passed
Jul 2 01:49:25.488093 ignition[1004]: INFO : Ignition finished successfully
Jul 2 01:49:25.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.423484 systemd[1]: Finished ignition-files.service.
Jul 2 01:49:25.436479 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 01:49:25.707462 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 01:49:25.460158 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 01:49:25.467749 systemd[1]: Starting ignition-quench.service...
Jul 2 01:49:25.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.481441 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 01:49:25.493543 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 01:49:25.493645 systemd[1]: Finished ignition-quench.service.
Jul 2 01:49:25.510271 systemd[1]: Reached target ignition-complete.target.
Jul 2 01:49:25.528918 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 01:49:25.563258 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 01:49:25.563373 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 01:49:25.580342 systemd[1]: Reached target initrd-fs.target.
Jul 2 01:49:25.591487 systemd[1]: Reached target initrd.target.
Jul 2 01:49:25.603176 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 01:49:25.604062 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 01:49:25.661066 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 01:49:25.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.675943 systemd[1]: Starting initrd-cleanup.service...
Jul 2 01:49:25.695670 systemd[1]: Stopped target nss-lookup.target.
Jul 2 01:49:25.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.702335 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 01:49:25.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.712668 systemd[1]: Stopped target timers.target.
Jul 2 01:49:25.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.725366 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 01:49:25.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.725479 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 01:49:25.740281 systemd[1]: Stopped target initrd.target.
Jul 2 01:49:25.748441 systemd[1]: Stopped target basic.target.
Jul 2 01:49:25.929216 ignition[1042]: INFO : Ignition 2.14.0
Jul 2 01:49:25.929216 ignition[1042]: INFO : Stage: umount
Jul 2 01:49:25.929216 ignition[1042]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 01:49:25.929216 ignition[1042]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 01:49:25.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.757733 systemd[1]: Stopped target ignition-complete.target.
Jul 2 01:49:25.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.986505 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 01:49:25.986505 ignition[1042]: INFO : umount: umount passed
Jul 2 01:49:25.986505 ignition[1042]: INFO : Ignition finished successfully
Jul 2 01:49:25.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.767617 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 01:49:26.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.776514 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 01:49:26.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.789373 systemd[1]: Stopped target remote-fs.target.
Jul 2 01:49:25.798091 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 01:49:26.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.808699 systemd[1]: Stopped target sysinit.target.
Jul 2 01:49:25.817428 systemd[1]: Stopped target local-fs.target.
Jul 2 01:49:25.825808 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 01:49:25.834391 systemd[1]: Stopped target swap.target.
Jul 2 01:49:25.845493 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 01:49:25.845625 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 01:49:25.854684 systemd[1]: Stopped target cryptsetup.target.
Jul 2 01:49:25.862922 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 01:49:26.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.863027 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 01:49:25.871376 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 01:49:25.871471 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 01:49:25.880679 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 01:49:26.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.880769 systemd[1]: Stopped ignition-files.service.
Jul 2 01:49:26.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.889551 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 01:49:26.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.161000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 01:49:25.889654 systemd[1]: Stopped flatcar-metadata-hostname.service.
Jul 2 01:49:25.899283 systemd[1]: Stopping ignition-mount.service...
Jul 2 01:49:26.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.918740 systemd[1]: Stopping iscsiuio.service...
Jul 2 01:49:26.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.930850 systemd[1]: Stopping sysroot-boot.service...
Jul 2 01:49:26.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.948499 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 01:49:25.948721 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 01:49:25.962048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 01:49:25.962150 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 01:49:26.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.983556 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 01:49:25.984946 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 01:49:25.985049 systemd[1]: Stopped iscsiuio.service.
Jul 2 01:49:26.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.991017 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 01:49:26.287869 kernel: hv_netvsc 0022487c-daa9-0022-487c-daa90022487c eth0: Data path switched from VF: enP11825s1
Jul 2 01:49:26.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:25.991107 systemd[1]: Stopped ignition-mount.service.
Jul 2 01:49:26.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.002323 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 01:49:26.002433 systemd[1]: Stopped ignition-disks.service.
Jul 2 01:49:26.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.011238 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 01:49:26.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.011328 systemd[1]: Stopped ignition-kargs.service.
Jul 2 01:49:26.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.020897 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 01:49:26.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.020994 systemd[1]: Stopped ignition-fetch.service.
Jul 2 01:49:26.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.029509 systemd[1]: Stopped target network.target.
Jul 2 01:49:26.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.038435 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 01:49:26.038546 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 01:49:26.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.047331 systemd[1]: Stopped target paths.target.
Jul 2 01:49:26.055421 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 01:49:26.062969 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 01:49:26.068125 systemd[1]: Stopped target slices.target.
Jul 2 01:49:26.076199 systemd[1]: Stopped target sockets.target.
Jul 2 01:49:26.083850 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 01:49:26.083920 systemd[1]: Closed iscsid.socket.
Jul 2 01:49:26.092598 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 01:49:26.092671 systemd[1]: Closed iscsiuio.socket.
Jul 2 01:49:26.100412 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 01:49:26.100510 systemd[1]: Stopped ignition-setup.service.
Jul 2 01:49:26.109064 systemd[1]: Stopping systemd-networkd.service...
Jul 2 01:49:26.118009 systemd[1]: Stopping systemd-resolved.service...
Jul 2 01:49:26.132443 systemd-networkd[843]: eth0: DHCPv6 lease lost
Jul 2 01:49:26.457000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 01:49:26.133437 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 01:49:26.133540 systemd[1]: Finished initrd-cleanup.service.
Jul 2 01:49:26.144613 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 01:49:26.144722 systemd[1]: Stopped systemd-resolved.service.
Jul 2 01:49:26.154073 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 01:49:26.154167 systemd[1]: Stopped systemd-networkd.service.
Jul 2 01:49:26.162647 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 01:49:26.162684 systemd[1]: Closed systemd-networkd.socket.
Jul 2 01:49:26.499000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 01:49:26.172644 systemd[1]: Stopping network-cleanup.service...
Jul 2 01:49:26.526748 kernel: kauditd_printk_skb: 42 callbacks suppressed
Jul 2 01:49:26.526770 kernel: audit: type=1334 audit(1719884966.499:81): prog-id=8 op=UNLOAD
Jul 2 01:49:26.526787 kernel: audit: type=1334 audit(1719884966.504:82): prog-id=7 op=UNLOAD
Jul 2 01:49:26.526796 kernel: audit: type=1334 audit(1719884966.506:83): prog-id=5 op=UNLOAD
Jul 2 01:49:26.504000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 01:49:26.506000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 01:49:26.180435 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 01:49:26.545053 kernel: audit: type=1334 audit(1719884966.506:84): prog-id=4 op=UNLOAD
Jul 2 01:49:26.545074 kernel: audit: type=1334 audit(1719884966.506:85): prog-id=3 op=UNLOAD
Jul 2 01:49:26.506000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 01:49:26.506000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 01:49:26.180499 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 01:49:26.185299 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 01:49:26.573941 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Jul 2 01:49:26.573983 iscsid[855]: iscsid shutting down.
Jul 2 01:49:26.185348 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 01:49:26.197160 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 01:49:26.197218 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 01:49:26.202546 systemd[1]: Stopping systemd-udevd.service...
Jul 2 01:49:26.222064 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 01:49:26.233257 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 01:49:26.233410 systemd[1]: Stopped systemd-udevd.service.
Jul 2 01:49:26.238149 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 01:49:26.238196 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 01:49:26.246768 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 01:49:26.246809 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 01:49:26.256210 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 01:49:26.256259 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 01:49:26.265625 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 01:49:26.265669 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 01:49:26.282748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 01:49:26.282812 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 01:49:26.295258 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 01:49:26.308816 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 01:49:26.308903 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 2 01:49:26.324227 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 01:49:26.324299 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 01:49:26.329427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 01:49:26.329480 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 01:49:26.340706 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 2 01:49:26.341370 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 01:49:26.341478 systemd[1]: Stopped sysroot-boot.service.
Jul 2 01:49:26.348722 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 01:49:26.348812 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 01:49:26.359648 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 01:49:26.359697 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 01:49:26.372943 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 01:49:26.373045 systemd[1]: Stopped network-cleanup.service.
Jul 2 01:49:26.383237 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 01:49:26.393768 systemd[1]: Starting initrd-switch-root.service...
Jul 2 01:49:26.499626 systemd[1]: Switching root.
Jul 2 01:49:26.574979 systemd-journald[276]: Journal stopped
Jul 2 01:49:38.201699 kernel: audit: type=1335 audit(1719884966.574:86): pid=276 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe=2F7573722F6C69622F73797374656D642F73797374656D642D6A6F75726E616C64202864656C6574656429 nl-mcgrp=1 op=disconnect res=1
Jul 2 01:49:38.201723 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 01:49:38.201735 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 01:49:38.201744 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 01:49:38.201752 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 01:49:38.201760 kernel: SELinux: policy capability open_perms=1
Jul 2 01:49:38.201769 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 01:49:38.201777 kernel: SELinux: policy capability always_check_network=0
Jul 2 01:49:38.201785 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 01:49:38.201793 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 01:49:38.201803 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 01:49:38.201811 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 01:49:38.201819 kernel: audit: type=1403 audit(1719884969.529:87): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 01:49:38.201829 systemd[1]: Successfully loaded SELinux policy in 294.936ms.
Jul 2 01:49:38.201840 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.344ms.
Jul 2 01:49:38.201852 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 01:49:38.201863 systemd[1]: Detected virtualization microsoft.
Jul 2 01:49:38.201872 systemd[1]: Detected architecture arm64.
Jul 2 01:49:38.201882 systemd[1]: Detected first boot.
Jul 2 01:49:38.201892 systemd[1]: Hostname set to .
Jul 2 01:49:38.201901 systemd[1]: Initializing machine ID from random generator.
Jul 2 01:49:38.201911 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 01:49:38.201921 kernel: audit: type=1400 audit(1719884971.439:88): avc: denied { associate } for pid=1094 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 01:49:38.201931 kernel: audit: type=1300 audit(1719884971.439:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000225f4 a1=40000287f8 a2=40000266c0 a3=32 items=0 ppid=1077 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 01:49:38.201941 kernel: audit: type=1327 audit(1719884971.439:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 01:49:38.201950 kernel: audit: type=1400 audit(1719884971.448:89): avc: denied { associate } for pid=1094 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 01:49:38.201960 kernel: audit: type=1300 audit(1719884971.448:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000226d9 a2=1ed a3=0 items=2 ppid=1077 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 01:49:38.201970 kernel: audit: type=1307 audit(1719884971.448:89): cwd="/"
Jul 2 01:49:38.201979 kernel: audit: type=1302 audit(1719884971.448:89): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 01:49:38.201989 kernel: audit: type=1302 audit(1719884971.448:89): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 01:49:38.201998 kernel: audit: type=1327 audit(1719884971.448:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 01:49:38.202007 systemd[1]: Populated /etc with preset unit settings.
Jul 2 01:49:38.202016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 01:49:38.202028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 01:49:38.202039 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 01:49:38.202048 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 01:49:38.202057 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 01:49:38.202066 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 01:49:38.202075 systemd[1]: Created slice system-getty.slice.
Jul 2 01:49:38.202085 systemd[1]: Created slice system-modprobe.slice.
Jul 2 01:49:38.202096 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 01:49:38.202107 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 01:49:38.202117 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 01:49:38.202126 systemd[1]: Created slice user.slice.
Jul 2 01:49:38.202135 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 01:49:38.202145 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 01:49:38.202154 systemd[1]: Set up automount boot.automount.
Jul 2 01:49:38.202164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 01:49:38.202173 systemd[1]: Reached target integritysetup.target.
Jul 2 01:49:38.202184 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 01:49:38.202194 systemd[1]: Reached target remote-fs.target.
Jul 2 01:49:38.202203 systemd[1]: Reached target slices.target.
Jul 2 01:49:38.202212 systemd[1]: Reached target swap.target.
Jul 2 01:49:38.202221 systemd[1]: Reached target torcx.target.
Jul 2 01:49:38.202231 systemd[1]: Reached target veritysetup.target.
Jul 2 01:49:38.202240 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 01:49:38.202249 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 01:49:38.202260 kernel: audit: type=1400 audit(1719884977.755:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 01:49:38.202270 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 01:49:38.202280 kernel: audit: type=1335 audit(1719884977.755:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 2 01:49:38.202289 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 01:49:38.202298 systemd[1]: Listening on systemd-journald.socket.
Jul 2 01:49:38.202307 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 01:49:38.202317 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 01:49:38.202326 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 01:49:38.202338 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 01:49:38.202347 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 01:49:38.202357 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 01:49:38.202366 systemd[1]: Mounting media.mount...
Jul 2 01:49:38.202375 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 01:49:38.202386 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 01:49:38.202397 systemd[1]: Mounting tmp.mount...
Jul 2 01:49:38.202407 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 01:49:38.202416 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 01:49:38.202426 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 01:49:38.202435 systemd[1]: Starting modprobe@configfs.service...
Jul 2 01:49:38.202444 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 01:49:38.202454 systemd[1]: Starting modprobe@drm.service...
Jul 2 01:49:38.202463 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 01:49:38.202474 systemd[1]: Starting modprobe@fuse.service... Jul 2 01:49:38.202484 systemd[1]: Starting modprobe@loop.service... Jul 2 01:49:38.202494 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 01:49:38.202503 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 01:49:38.202513 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 01:49:38.202522 systemd[1]: Starting systemd-journald.service... Jul 2 01:49:38.202531 kernel: loop: module loaded Jul 2 01:49:38.202540 kernel: fuse: init (API version 7.34) Jul 2 01:49:38.202550 systemd[1]: Starting systemd-modules-load.service... Jul 2 01:49:38.202567 systemd[1]: Starting systemd-network-generator.service... Jul 2 01:49:38.202578 systemd[1]: Starting systemd-remount-fs.service... Jul 2 01:49:38.202588 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 01:49:38.202597 systemd[1]: Mounted dev-hugepages.mount. Jul 2 01:49:38.202607 systemd[1]: Mounted dev-mqueue.mount. Jul 2 01:49:38.202617 systemd[1]: Mounted media.mount. Jul 2 01:49:38.202626 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 01:49:38.202635 kernel: audit: type=1305 audit(1719884978.196:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 01:49:38.202649 systemd-journald[1219]: Journal started Jul 2 01:49:38.202692 systemd-journald[1219]: Runtime Journal (/run/log/journal/38636ed5b7cd47708dfce20a76e0a43d) is 8.0M, max 78.6M, 70.6M free. 
Jul 2 01:49:37.755000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 01:49:38.196000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 01:49:38.214594 systemd[1]: Started systemd-journald.service. Jul 2 01:49:38.196000 audit[1219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff7de53a0 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 01:49:38.244574 kernel: audit: type=1300 audit(1719884978.196:92): arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff7de53a0 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 01:49:38.250169 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 01:49:38.254776 systemd[1]: Mounted tmp.mount. Jul 2 01:49:38.258679 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 01:49:38.263644 systemd[1]: Finished kmod-static-nodes.service. Jul 2 01:49:38.196000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 01:49:38.275171 kernel: audit: type=1327 audit(1719884978.196:92): proctitle="/usr/lib/systemd/systemd-journald" Jul 2 01:49:38.275502 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 01:49:38.275827 systemd[1]: Finished modprobe@configfs.service. Jul 2 01:49:38.280870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 01:49:38.281058 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 01:49:38.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.319451 kernel: audit: type=1130 audit(1719884978.248:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.319530 kernel: audit: type=1130 audit(1719884978.262:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.320168 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 01:49:38.320436 kernel: audit: type=1130 audit(1719884978.274:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.321494 systemd[1]: Finished modprobe@drm.service. Jul 2 01:49:38.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.359214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 2 01:49:38.359527 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 01:49:38.360168 kernel: audit: type=1130 audit(1719884978.280:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.360209 kernel: audit: type=1131 audit(1719884978.280:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 01:49:38.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.383175 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 01:49:38.383406 systemd[1]: Finished modprobe@fuse.service. Jul 2 01:49:38.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.388154 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 01:49:38.388429 systemd[1]: Finished modprobe@loop.service. Jul 2 01:49:38.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.393378 systemd[1]: Finished systemd-modules-load.service. Jul 2 01:49:38.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.398787 systemd[1]: Finished systemd-network-generator.service. 
Jul 2 01:49:38.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.406152 systemd[1]: Finished systemd-remount-fs.service. Jul 2 01:49:38.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.411387 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 01:49:38.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.417393 systemd[1]: Reached target network-pre.target. Jul 2 01:49:38.423404 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 01:49:38.429390 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 01:49:38.433899 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 01:49:38.446821 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 01:49:38.452357 systemd[1]: Starting systemd-journal-flush.service... Jul 2 01:49:38.457125 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 01:49:38.458427 systemd[1]: Starting systemd-random-seed.service... Jul 2 01:49:38.463004 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 01:49:38.464284 systemd[1]: Starting systemd-sysctl.service... Jul 2 01:49:38.469632 systemd[1]: Starting systemd-sysusers.service... Jul 2 01:49:38.474959 systemd[1]: Starting systemd-udev-settle.service... 
Jul 2 01:49:38.481676 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 01:49:38.487395 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 01:49:38.494214 udevadm[1243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 01:49:38.531943 systemd-journald[1219]: Time spent on flushing to /var/log/journal/38636ed5b7cd47708dfce20a76e0a43d is 13.627ms for 1037 entries. Jul 2 01:49:38.531943 systemd-journald[1219]: System Journal (/var/log/journal/38636ed5b7cd47708dfce20a76e0a43d) is 8.0M, max 2.6G, 2.6G free. Jul 2 01:49:38.639356 systemd-journald[1219]: Received client request to flush runtime journal. Jul 2 01:49:38.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:38.527181 systemd[1]: Finished systemd-random-seed.service. Jul 2 01:49:38.539885 systemd[1]: Reached target first-boot-complete.target. Jul 2 01:49:38.547213 systemd[1]: Finished systemd-sysctl.service. Jul 2 01:49:38.640451 systemd[1]: Finished systemd-journal-flush.service. Jul 2 01:49:38.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:39.080591 systemd[1]: Finished systemd-sysusers.service. Jul 2 01:49:39.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 01:49:39.086618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 01:49:39.389049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 01:49:39.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:39.763553 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 01:49:39.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:39.769866 systemd[1]: Starting systemd-udevd.service... Jul 2 01:49:39.789015 systemd-udevd[1254]: Using default interface naming scheme 'v252'. Jul 2 01:49:40.223662 systemd[1]: Started systemd-udevd.service. Jul 2 01:49:40.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:40.234713 systemd[1]: Starting systemd-networkd.service... Jul 2 01:49:40.267641 systemd[1]: Found device dev-ttyAMA0.device. Jul 2 01:49:40.311110 systemd[1]: Starting systemd-userdbd.service... 
Jul 2 01:49:40.336000 audit[1263]: AVC avc: denied { confidentiality } for pid=1263 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 01:49:40.384017 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 01:49:40.384148 kernel: hv_vmbus: registering driver hv_balloon Jul 2 01:49:40.384180 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 01:49:40.384200 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 01:49:40.384229 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 2 01:49:40.384256 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 01:49:40.391059 systemd[1]: Started systemd-userdbd.service. Jul 2 01:49:40.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:40.402702 kernel: Console: switching to colour dummy device 80x25 Jul 2 01:49:40.414839 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 01:49:40.336000 audit[1263]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab006f54b0 a1=aa2c a2=ffff8c9d24b0 a3=aaab00653010 items=12 ppid=1254 pid=1263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 01:49:40.336000 audit: CWD cwd="/" Jul 2 01:49:40.336000 audit: PATH item=0 name=(null) inode=6385 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=1 name=(null) inode=11390 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=2 name=(null) inode=11390 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=3 name=(null) inode=11391 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=4 name=(null) inode=11390 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=5 name=(null) inode=11392 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=6 name=(null) inode=11390 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=7 name=(null) inode=11393 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=8 name=(null) inode=11390 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=9 name=(null) inode=11394 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=10 name=(null) inode=11390 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PATH item=11 name=(null) inode=11395 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:40.336000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 01:49:40.436437 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 01:49:40.436537 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 01:49:40.436556 kernel: hv_vmbus: registering driver hv_utils Jul 2 01:49:40.437591 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 01:49:40.441612 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 01:49:40.095748 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 01:49:40.147915 systemd-journald[1219]: Time jumped backwards, rotating. 
Jul 2 01:49:40.326687 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1267) Jul 2 01:49:40.346606 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Jul 2 01:49:40.347312 systemd[1]: Finished systemd-udev-settle.service. Jul 2 01:49:40.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:40.354170 systemd[1]: Starting lvm2-activation-early.service... Jul 2 01:49:40.509250 systemd-networkd[1275]: lo: Link UP Jul 2 01:49:40.509264 systemd-networkd[1275]: lo: Gained carrier Jul 2 01:49:40.509712 systemd-networkd[1275]: Enumeration completed Jul 2 01:49:40.509859 systemd[1]: Started systemd-networkd.service. Jul 2 01:49:40.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:40.516120 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 01:49:40.539346 systemd-networkd[1275]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 01:49:40.592618 kernel: mlx5_core 2e31:00:02.0 enP11825s1: Link up Jul 2 01:49:40.618381 systemd-networkd[1275]: enP11825s1: Link UP Jul 2 01:49:40.618616 kernel: hv_netvsc 0022487c-daa9-0022-487c-daa90022487c eth0: Data path switched to VF: enP11825s1 Jul 2 01:49:40.618850 systemd-networkd[1275]: eth0: Link UP Jul 2 01:49:40.618908 systemd-networkd[1275]: eth0: Gained carrier Jul 2 01:49:40.622896 systemd-networkd[1275]: enP11825s1: Gained carrier Jul 2 01:49:40.634715 systemd-networkd[1275]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 01:49:40.636519 lvm[1332]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 01:49:40.676665 systemd[1]: Finished lvm2-activation-early.service. Jul 2 01:49:40.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:40.681833 systemd[1]: Reached target cryptsetup.target. Jul 2 01:49:40.687708 systemd[1]: Starting lvm2-activation.service... Jul 2 01:49:40.692253 lvm[1335]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 01:49:40.717636 systemd[1]: Finished lvm2-activation.service. Jul 2 01:49:40.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:40.722225 systemd[1]: Reached target local-fs-pre.target. Jul 2 01:49:40.726771 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 01:49:40.726798 systemd[1]: Reached target local-fs.target. Jul 2 01:49:40.730909 systemd[1]: Reached target machines.target. Jul 2 01:49:40.736559 systemd[1]: Starting ldconfig.service... 
Jul 2 01:49:40.755285 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 01:49:40.755356 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 01:49:40.756620 systemd[1]: Starting systemd-boot-update.service... Jul 2 01:49:40.762067 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 01:49:40.769027 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 01:49:40.775038 systemd[1]: Starting systemd-sysext.service... Jul 2 01:49:40.793168 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1338 (bootctl) Jul 2 01:49:40.794393 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 01:49:40.829016 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 01:49:40.834413 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 01:49:40.834729 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 01:49:42.005617 kernel: loop0: detected capacity change from 0 to 193208 Jul 2 01:49:42.014160 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 01:49:42.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.034624 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 01:49:42.050621 kernel: loop1: detected capacity change from 0 to 193208 Jul 2 01:49:42.054805 (sd-sysext)[1354]: Using extensions 'kubernetes'. Jul 2 01:49:42.055429 (sd-sysext)[1354]: Merged extensions into '/usr'. Jul 2 01:49:42.073910 systemd[1]: Mounting usr-share-oem.mount... Jul 2 01:49:42.077887 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 2 01:49:42.079186 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 01:49:42.084335 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 01:49:42.091356 systemd[1]: Starting modprobe@loop.service... Jul 2 01:49:42.098959 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.099122 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 01:49:42.099749 systemd-networkd[1275]: eth0: Gained IPv6LL Jul 2 01:49:42.101709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 01:49:42.103374 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 01:49:42.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.109418 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 01:49:42.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.114939 systemd[1]: Mounted usr-share-oem.mount. Jul 2 01:49:42.119262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 01:49:42.119433 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 01:49:42.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:42.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.124829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 01:49:42.124987 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 01:49:42.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.130019 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 01:49:42.130189 systemd[1]: Finished modprobe@loop.service. Jul 2 01:49:42.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.135248 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 01:49:42.135350 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.136351 systemd[1]: Finished systemd-sysext.service. 
Jul 2 01:49:42.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.142748 systemd[1]: Starting ensure-sysext.service... Jul 2 01:49:42.147912 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 01:49:42.156706 systemd[1]: Reloading. Jul 2 01:49:42.202775 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 01:49:42.206383 /usr/lib/systemd/system-generators/torcx-generator[1390]: time="2024-07-02T01:49:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 01:49:42.207412 /usr/lib/systemd/system-generators/torcx-generator[1390]: time="2024-07-02T01:49:42Z" level=info msg="torcx already run" Jul 2 01:49:42.221103 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 01:49:42.236379 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 01:49:42.237721 systemd-fsck[1352]: fsck.fat 4.2 (2021-01-31) Jul 2 01:49:42.237721 systemd-fsck[1352]: /dev/sda1: 236 files, 117047/258078 clusters Jul 2 01:49:42.310609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 01:49:42.310627 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 01:49:42.328511 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 01:49:42.392568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 01:49:42.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.404646 systemd[1]: Mounting boot.mount... Jul 2 01:49:42.412951 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.414349 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 01:49:42.420019 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 01:49:42.425829 systemd[1]: Starting modprobe@loop.service... Jul 2 01:49:42.429787 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.429960 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 01:49:42.433384 systemd[1]: Mounted boot.mount. Jul 2 01:49:42.437581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 01:49:42.437779 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 01:49:42.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:42.446460 kernel: kauditd_printk_skb: 53 callbacks suppressed Jul 2 01:49:42.446571 kernel: audit: type=1130 audit(1719884982.441:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.463035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 01:49:42.463227 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 01:49:42.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.483736 kernel: audit: type=1131 audit(1719884982.461:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.484377 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 01:49:42.484569 systemd[1]: Finished modprobe@loop.service. Jul 2 01:49:42.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.504857 kernel: audit: type=1130 audit(1719884982.482:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.505522 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 01:49:42.505623 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 2 01:49:42.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.523547 kernel: audit: type=1131 audit(1719884982.482:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.509009 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.513854 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 01:49:42.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.544198 kernel: audit: type=1130 audit(1719884982.504:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.544787 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 01:49:42.567408 kernel: audit: type=1131 audit(1719884982.504:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.570837 systemd[1]: Starting modprobe@loop.service... Jul 2 01:49:42.574884 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 01:49:42.575038 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 01:49:42.577142 systemd[1]: Finished systemd-boot-update.service. Jul 2 01:49:42.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.582863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 01:49:42.583006 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 01:49:42.604423 kernel: audit: type=1130 audit(1719884982.580:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.605464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 01:49:42.605746 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 01:49:42.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.644079 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 01:49:42.644376 systemd[1]: Finished modprobe@loop.service. Jul 2 01:49:42.644807 kernel: audit: type=1130 audit(1719884982.603:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:42.644912 kernel: audit: type=1131 audit(1719884982.603:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.666040 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 01:49:42.666281 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.666764 kernel: audit: type=1130 audit(1719884982.642:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.670140 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.671573 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 01:49:42.677329 systemd[1]: Starting modprobe@drm.service... Jul 2 01:49:42.682927 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 01:49:42.691330 systemd[1]: Starting modprobe@loop.service... Jul 2 01:49:42.695830 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.696102 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 01:49:42.697211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 01:49:42.697487 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 01:49:42.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.702799 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 01:49:42.703068 systemd[1]: Finished modprobe@drm.service. Jul 2 01:49:42.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.708206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 01:49:42.708464 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 01:49:42.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.714326 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 01:49:42.714731 systemd[1]: Finished modprobe@loop.service. Jul 2 01:49:42.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.720180 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 01:49:42.720395 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 01:49:42.721695 systemd[1]: Finished ensure-sysext.service. Jul 2 01:49:42.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:42.982319 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 01:49:42.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:42.989397 systemd[1]: Starting audit-rules.service... Jul 2 01:49:42.994937 systemd[1]: Starting clean-ca-certificates.service... Jul 2 01:49:43.001145 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 01:49:43.008414 systemd[1]: Starting systemd-resolved.service... Jul 2 01:49:43.015129 systemd[1]: Starting systemd-timesyncd.service... Jul 2 01:49:43.021478 systemd[1]: Starting systemd-update-utmp.service... Jul 2 01:49:43.026521 systemd[1]: Finished clean-ca-certificates.service. Jul 2 01:49:43.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:43.031852 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 01:49:43.057000 audit[1496]: SYSTEM_BOOT pid=1496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 01:49:43.061872 systemd[1]: Finished systemd-update-utmp.service. Jul 2 01:49:43.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:43.151979 systemd[1]: Started systemd-timesyncd.service. Jul 2 01:49:43.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:43.157210 systemd[1]: Reached target time-set.target. 
Jul 2 01:49:43.204995 systemd-resolved[1493]: Positive Trust Anchors: Jul 2 01:49:43.205331 systemd-resolved[1493]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 01:49:43.205411 systemd-resolved[1493]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 01:49:43.208620 systemd-resolved[1493]: Using system hostname 'ci-3510.3.5-a-637f296955'. Jul 2 01:49:43.210166 systemd[1]: Started systemd-resolved.service. Jul 2 01:49:43.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:43.215120 systemd[1]: Reached target network.target. Jul 2 01:49:43.219370 systemd[1]: Reached target network-online.target. Jul 2 01:49:43.224539 systemd[1]: Reached target nss-lookup.target. Jul 2 01:49:43.229984 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 01:49:43.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:43.365000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 01:49:43.365000 audit[1513]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcd08b320 a2=420 a3=0 items=0 ppid=1489 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 01:49:43.365000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 01:49:43.390554 augenrules[1513]: No rules Jul 2 01:49:43.391636 systemd[1]: Finished audit-rules.service. Jul 2 01:49:43.481398 systemd-timesyncd[1495]: Contacted time server 148.135.68.31:123 (0.flatcar.pool.ntp.org). Jul 2 01:49:43.481494 systemd-timesyncd[1495]: Initial clock synchronization to Tue 2024-07-02 01:49:43.490194 UTC. Jul 2 01:49:49.518153 ldconfig[1337]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 01:49:49.531813 systemd[1]: Finished ldconfig.service. Jul 2 01:49:49.538261 systemd[1]: Starting systemd-update-done.service... Jul 2 01:49:49.562201 systemd[1]: Finished systemd-update-done.service. Jul 2 01:49:49.567669 systemd[1]: Reached target sysinit.target. Jul 2 01:49:49.571987 systemd[1]: Started motdgen.path. Jul 2 01:49:49.575888 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 01:49:49.582161 systemd[1]: Started logrotate.timer. Jul 2 01:49:49.586084 systemd[1]: Started mdadm.timer. Jul 2 01:49:49.589697 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 01:49:49.594293 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 01:49:49.594327 systemd[1]: Reached target paths.target. Jul 2 01:49:49.598555 systemd[1]: Reached target timers.target. 
Jul 2 01:49:49.604670 systemd[1]: Listening on dbus.socket. Jul 2 01:49:49.610056 systemd[1]: Starting docker.socket... Jul 2 01:49:49.615152 systemd[1]: Listening on sshd.socket. Jul 2 01:49:49.619315 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 01:49:49.619812 systemd[1]: Listening on docker.socket. Jul 2 01:49:49.623991 systemd[1]: Reached target sockets.target. Jul 2 01:49:49.628369 systemd[1]: Reached target basic.target. Jul 2 01:49:49.632660 systemd[1]: System is tainted: cgroupsv1 Jul 2 01:49:49.632717 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 01:49:49.632743 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 01:49:49.633969 systemd[1]: Starting containerd.service... Jul 2 01:49:49.639189 systemd[1]: Starting dbus.service... Jul 2 01:49:49.643865 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 01:49:49.649655 systemd[1]: Starting extend-filesystems.service... Jul 2 01:49:49.653945 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 01:49:49.680983 systemd[1]: Starting kubelet.service... Jul 2 01:49:49.686100 systemd[1]: Starting motdgen.service... Jul 2 01:49:49.691174 systemd[1]: Started nvidia.service. Jul 2 01:49:49.696516 systemd[1]: Starting prepare-helm.service... Jul 2 01:49:49.701568 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 01:49:49.707549 systemd[1]: Starting sshd-keygen.service... Jul 2 01:49:49.714345 systemd[1]: Starting systemd-logind.service... 
Jul 2 01:49:49.718593 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 01:49:49.718682 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 01:49:49.720016 systemd[1]: Starting update-engine.service... Jul 2 01:49:49.725692 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 01:49:49.735176 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 01:49:49.735572 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 01:49:49.773402 extend-filesystems[1528]: Found loop1 Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda1 Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda2 Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda3 Jul 2 01:49:49.778130 extend-filesystems[1528]: Found usr Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda4 Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda6 Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda7 Jul 2 01:49:49.778130 extend-filesystems[1528]: Found sda9 Jul 2 01:49:49.778130 extend-filesystems[1528]: Checking size of /dev/sda9 Jul 2 01:49:49.854181 jq[1548]: true Jul 2 01:49:49.800016 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 01:49:49.872046 jq[1527]: false Jul 2 01:49:49.800298 systemd[1]: Finished motdgen.service. Jul 2 01:49:49.818787 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 01:49:49.872422 jq[1566]: true Jul 2 01:49:49.819048 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 01:49:49.839704 systemd-logind[1543]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 2 01:49:49.839934 systemd-logind[1543]: New seat seat0. 
Jul 2 01:49:49.881772 tar[1551]: linux-arm64/helm Jul 2 01:49:49.887336 extend-filesystems[1528]: Old size kept for /dev/sda9 Jul 2 01:49:49.887336 extend-filesystems[1528]: Found sr0 Jul 2 01:49:49.892690 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 01:49:49.892959 systemd[1]: Finished extend-filesystems.service. Jul 2 01:49:49.933625 env[1557]: time="2024-07-02T01:49:49.932721827Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 01:49:50.112584 env[1557]: time="2024-07-02T01:49:50.031862503Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 01:49:50.101442 dbus-daemon[1526]: [system] SELinux support is enabled Jul 2 01:49:50.077090 systemd[1]: nvidia.service: Deactivated successfully. Jul 2 01:49:50.108126 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 01:49:50.101640 systemd[1]: Started dbus.service. Jul 2 01:49:50.107588 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 01:49:50.107632 systemd[1]: Reached target system-config.target. Jul 2 01:49:50.115164 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Jul 2 01:49:50.115370 env[1557]: time="2024-07-02T01:49:50.115318154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 01:49:50.116372 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 01:49:50.116396 systemd[1]: Reached target user-config.target. Jul 2 01:49:50.121957 env[1557]: time="2024-07-02T01:49:50.121766320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 01:49:50.121957 env[1557]: time="2024-07-02T01:49:50.121806815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 01:49:50.122107 env[1557]: time="2024-07-02T01:49:50.122077118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 01:49:50.122107 env[1557]: time="2024-07-02T01:49:50.122101087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 01:49:50.122158 env[1557]: time="2024-07-02T01:49:50.122114492Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 01:49:50.122158 env[1557]: time="2024-07-02T01:49:50.122123375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 01:49:50.122222 env[1557]: time="2024-07-02T01:49:50.122196523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 01:49:50.124924 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 01:49:50.130930 env[1557]: time="2024-07-02T01:49:50.130896623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 01:49:50.131160 env[1557]: time="2024-07-02T01:49:50.131130871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 01:49:50.131160 env[1557]: time="2024-07-02T01:49:50.131154640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 01:49:50.131249 env[1557]: time="2024-07-02T01:49:50.131228388Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 01:49:50.131249 env[1557]: time="2024-07-02T01:49:50.131245035Z" level=info msg="metadata content store policy set" policy=shared Jul 2 01:49:50.133534 systemd[1]: Started systemd-logind.service. Jul 2 01:49:50.151261 env[1557]: time="2024-07-02T01:49:50.151222011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 01:49:50.151261 env[1557]: time="2024-07-02T01:49:50.151265548Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 01:49:50.151405 env[1557]: time="2024-07-02T01:49:50.151279873Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 01:49:50.151405 env[1557]: time="2024-07-02T01:49:50.151313926Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.151405 env[1557]: time="2024-07-02T01:49:50.151331053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.151405 env[1557]: time="2024-07-02T01:49:50.151344298Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.151405 env[1557]: time="2024-07-02T01:49:50.151356862Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 2 01:49:50.151749 env[1557]: time="2024-07-02T01:49:50.151729444Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.151781 env[1557]: time="2024-07-02T01:49:50.151754173Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.151781 env[1557]: time="2024-07-02T01:49:50.151774021Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.151817 env[1557]: time="2024-07-02T01:49:50.151786465Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.151817 env[1557]: time="2024-07-02T01:49:50.151800150Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 01:49:50.151931 env[1557]: time="2024-07-02T01:49:50.151910952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 01:49:50.152024 env[1557]: time="2024-07-02T01:49:50.152006349Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 01:49:50.152320 env[1557]: time="2024-07-02T01:49:50.152299140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 01:49:50.152361 env[1557]: time="2024-07-02T01:49:50.152328871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152361 env[1557]: time="2024-07-02T01:49:50.152341996Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 01:49:50.152401 env[1557]: time="2024-07-02T01:49:50.152384732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 2 01:49:50.152420 env[1557]: time="2024-07-02T01:49:50.152397977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152420 env[1557]: time="2024-07-02T01:49:50.152410262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152459 env[1557]: time="2024-07-02T01:49:50.152421906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152459 env[1557]: time="2024-07-02T01:49:50.152433671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152459 env[1557]: time="2024-07-02T01:49:50.152445995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152516 env[1557]: time="2024-07-02T01:49:50.152457920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152516 env[1557]: time="2024-07-02T01:49:50.152469484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152516 env[1557]: time="2024-07-02T01:49:50.152485370Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 01:49:50.152633 env[1557]: time="2024-07-02T01:49:50.152612338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152675 env[1557]: time="2024-07-02T01:49:50.152635827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152675 env[1557]: time="2024-07-02T01:49:50.152648912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 01:49:50.152675 env[1557]: time="2024-07-02T01:49:50.152660237Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 01:49:50.152730 env[1557]: time="2024-07-02T01:49:50.152673962Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 01:49:50.152730 env[1557]: time="2024-07-02T01:49:50.152686887Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 01:49:50.152730 env[1557]: time="2024-07-02T01:49:50.152704934Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 01:49:50.152782 env[1557]: time="2024-07-02T01:49:50.152739347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 01:49:50.152997 env[1557]: time="2024-07-02T01:49:50.152931339Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 01:49:50.152997 env[1557]: time="2024-07-02T01:49:50.152990402Z" level=info msg="Connect containerd service" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.153024295Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.153558777Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.153810993Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.153847687Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.158565116Z" level=info msg="containerd successfully booted in 0.226900s" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.158652509Z" level=info msg="Start subscribing containerd event" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.158695846Z" level=info msg="Start recovering state" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.158760910Z" level=info msg="Start event monitor" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.158780038Z" level=info msg="Start snapshots syncer" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.158791522Z" level=info msg="Start cni network conf syncer for default" Jul 2 01:49:50.171784 env[1557]: time="2024-07-02T01:49:50.158798845Z" level=info msg="Start streaming server" Jul 2 01:49:50.153968 systemd[1]: Started containerd.service. Jul 2 01:49:50.457834 tar[1551]: linux-arm64/LICENSE Jul 2 01:49:50.457926 tar[1551]: linux-arm64/README.md Jul 2 01:49:50.462173 systemd[1]: Finished prepare-helm.service. Jul 2 01:49:50.559785 update_engine[1545]: I0702 01:49:50.539989 1545 main.cc:92] Flatcar Update Engine starting Jul 2 01:49:50.602471 systemd[1]: Started update-engine.service. Jul 2 01:49:50.603870 update_engine[1545]: I0702 01:49:50.603758 1545 update_check_scheduler.cc:74] Next update check in 8m32s Jul 2 01:49:50.609203 systemd[1]: Started locksmithd.service. Jul 2 01:49:50.692162 systemd[1]: Started kubelet.service. 
Jul 2 01:49:51.157281 kubelet[1648]: E0702 01:49:51.157201 1648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:49:51.159255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:49:51.159404 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:49:51.946679 locksmithd[1643]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 01:49:53.481458 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 01:49:53.507045 systemd[1]: Finished sshd-keygen.service. Jul 2 01:49:53.513646 systemd[1]: Starting issuegen.service... Jul 2 01:49:53.518534 systemd[1]: Started waagent.service. Jul 2 01:49:53.523702 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 01:49:53.524133 systemd[1]: Finished issuegen.service. Jul 2 01:49:53.529978 systemd[1]: Starting systemd-user-sessions.service... Jul 2 01:49:53.578126 systemd[1]: Finished systemd-user-sessions.service. Jul 2 01:49:53.584954 systemd[1]: Started getty@tty1.service. Jul 2 01:49:53.590809 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 2 01:49:53.595917 systemd[1]: Reached target getty.target. Jul 2 01:49:53.600183 systemd[1]: Reached target multi-user.target. Jul 2 01:49:53.606220 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 01:49:53.618803 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 01:49:53.619059 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 01:49:53.625919 systemd[1]: Startup finished in 15.119s (kernel) + 24.982s (userspace) = 40.101s. 
Jul 2 01:49:54.186078 login[1677]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jul 2 01:49:54.201726 login[1678]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 01:49:54.273426 systemd[1]: Created slice user-500.slice. Jul 2 01:49:54.274449 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 01:49:54.276946 systemd-logind[1543]: New session 1 of user core. Jul 2 01:49:54.312459 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 01:49:54.314101 systemd[1]: Starting user@500.service... Jul 2 01:49:54.371039 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:49:54.617381 systemd[1684]: Queued start job for default target default.target. Jul 2 01:49:54.617966 systemd[1684]: Reached target paths.target. Jul 2 01:49:54.617989 systemd[1684]: Reached target sockets.target. Jul 2 01:49:54.618000 systemd[1684]: Reached target timers.target. Jul 2 01:49:54.618010 systemd[1684]: Reached target basic.target. Jul 2 01:49:54.618132 systemd[1]: Started user@500.service. Jul 2 01:49:54.618665 systemd[1684]: Reached target default.target. Jul 2 01:49:54.618707 systemd[1684]: Startup finished in 241ms. Jul 2 01:49:54.618968 systemd[1]: Started session-1.scope. Jul 2 01:49:55.186417 login[1677]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 01:49:55.190388 systemd-logind[1543]: New session 2 of user core. Jul 2 01:49:55.190780 systemd[1]: Started session-2.scope. 
Jul 2 01:49:59.197404 waagent[1673]: 2024-07-02T01:49:59.197302Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Jul 2 01:49:59.204230 waagent[1673]: 2024-07-02T01:49:59.204142Z INFO Daemon Daemon OS: flatcar 3510.3.5 Jul 2 01:49:59.209021 waagent[1673]: 2024-07-02T01:49:59.208945Z INFO Daemon Daemon Python: 3.9.16 Jul 2 01:49:59.213664 waagent[1673]: 2024-07-02T01:49:59.213543Z INFO Daemon Daemon Run daemon Jul 2 01:49:59.218270 waagent[1673]: 2024-07-02T01:49:59.218199Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.5' Jul 2 01:49:59.235003 waagent[1673]: 2024-07-02T01:49:59.234878Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jul 2 01:49:59.250053 waagent[1673]: 2024-07-02T01:49:59.249924Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 01:49:59.259932 waagent[1673]: 2024-07-02T01:49:59.259863Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 01:49:59.264997 waagent[1673]: 2024-07-02T01:49:59.264938Z INFO Daemon Daemon Using waagent for provisioning Jul 2 01:49:59.270894 waagent[1673]: 2024-07-02T01:49:59.270833Z INFO Daemon Daemon Activate resource disk Jul 2 01:49:59.275729 waagent[1673]: 2024-07-02T01:49:59.275672Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 01:49:59.290067 waagent[1673]: 2024-07-02T01:49:59.290009Z INFO Daemon Daemon Found device: None Jul 2 01:49:59.294734 waagent[1673]: 2024-07-02T01:49:59.294674Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 01:49:59.303117 waagent[1673]: 2024-07-02T01:49:59.303060Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 
01:49:59.315462 waagent[1673]: 2024-07-02T01:49:59.315403Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 01:49:59.321629 waagent[1673]: 2024-07-02T01:49:59.321561Z INFO Daemon Daemon Running default provisioning handler Jul 2 01:49:59.334436 waagent[1673]: 2024-07-02T01:49:59.334298Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jul 2 01:49:59.349590 waagent[1673]: 2024-07-02T01:49:59.349445Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 01:49:59.359526 waagent[1673]: 2024-07-02T01:49:59.359438Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 01:49:59.364678 waagent[1673]: 2024-07-02T01:49:59.364607Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 01:49:59.423319 waagent[1673]: 2024-07-02T01:49:59.423079Z INFO Daemon Daemon Successfully mounted dvd Jul 2 01:49:59.491557 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 01:49:59.525945 waagent[1673]: 2024-07-02T01:49:59.525802Z INFO Daemon Daemon Detect protocol endpoint Jul 2 01:49:59.530931 waagent[1673]: 2024-07-02T01:49:59.530850Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 01:49:59.536771 waagent[1673]: 2024-07-02T01:49:59.536688Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 2 01:49:59.543519 waagent[1673]: 2024-07-02T01:49:59.543446Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 01:49:59.549120 waagent[1673]: 2024-07-02T01:49:59.549055Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 01:49:59.554276 waagent[1673]: 2024-07-02T01:49:59.554212Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 01:49:59.721383 waagent[1673]: 2024-07-02T01:49:59.721320Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 01:49:59.728395 waagent[1673]: 2024-07-02T01:49:59.728350Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 01:49:59.733924 waagent[1673]: 2024-07-02T01:49:59.733848Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 01:50:00.655279 waagent[1673]: 2024-07-02T01:50:00.655126Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 01:50:00.675338 waagent[1673]: 2024-07-02T01:50:00.675258Z INFO Daemon Daemon Forcing an update of the goal state.. Jul 2 01:50:00.681115 waagent[1673]: 2024-07-02T01:50:00.681046Z INFO Daemon Daemon Fetching goal state [incarnation 1] Jul 2 01:50:00.755749 waagent[1673]: 2024-07-02T01:50:00.755617Z INFO Daemon Daemon Found private key matching thumbprint 7BB49B77886E69ECFE71A87ABD537418BE933108 Jul 2 01:50:00.764112 waagent[1673]: 2024-07-02T01:50:00.764027Z INFO Daemon Daemon Certificate with thumbprint 8FEE69FFBAFA37E44003AC684A2112A42F05A91B has no matching private key. 
Jul 2 01:50:00.773837 waagent[1673]: 2024-07-02T01:50:00.773758Z INFO Daemon Daemon Fetch goal state completed Jul 2 01:50:00.817944 waagent[1673]: 2024-07-02T01:50:00.817889Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 74cf573c-dc5f-4e7b-984a-60b198b9a8ec New eTag: 4783414783237914212] Jul 2 01:50:00.828807 waagent[1673]: 2024-07-02T01:50:00.828739Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Jul 2 01:50:00.844144 waagent[1673]: 2024-07-02T01:50:00.844071Z INFO Daemon Daemon Starting provisioning Jul 2 01:50:00.849172 waagent[1673]: 2024-07-02T01:50:00.849109Z INFO Daemon Daemon Handle ovf-env.xml. Jul 2 01:50:00.853826 waagent[1673]: 2024-07-02T01:50:00.853772Z INFO Daemon Daemon Set hostname [ci-3510.3.5-a-637f296955] Jul 2 01:50:00.909584 waagent[1673]: 2024-07-02T01:50:00.909442Z INFO Daemon Daemon Publish hostname [ci-3510.3.5-a-637f296955] Jul 2 01:50:00.916203 waagent[1673]: 2024-07-02T01:50:00.916125Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 01:50:00.922728 waagent[1673]: 2024-07-02T01:50:00.922658Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 01:50:00.938987 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Jul 2 01:50:00.939201 systemd[1]: Stopped systemd-networkd-wait-online.service. Jul 2 01:50:00.939253 systemd[1]: Stopping systemd-networkd-wait-online.service... Jul 2 01:50:00.939449 systemd[1]: Stopping systemd-networkd.service... Jul 2 01:50:00.946646 systemd-networkd[1275]: eth0: DHCPv6 lease lost Jul 2 01:50:00.947969 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 01:50:00.948222 systemd[1]: Stopped systemd-networkd.service. Jul 2 01:50:00.950367 systemd[1]: Starting systemd-networkd.service... 
Jul 2 01:50:00.984572 systemd-networkd[1730]: enP11825s1: Link UP Jul 2 01:50:00.984584 systemd-networkd[1730]: enP11825s1: Gained carrier Jul 2 01:50:00.985571 systemd-networkd[1730]: eth0: Link UP Jul 2 01:50:00.985583 systemd-networkd[1730]: eth0: Gained carrier Jul 2 01:50:00.986028 systemd-networkd[1730]: lo: Link UP Jul 2 01:50:00.986038 systemd-networkd[1730]: lo: Gained carrier Jul 2 01:50:00.986273 systemd-networkd[1730]: eth0: Gained IPv6LL Jul 2 01:50:00.986481 systemd-networkd[1730]: Enumeration completed Jul 2 01:50:00.986674 systemd[1]: Started systemd-networkd.service. Jul 2 01:50:00.988524 systemd-networkd[1730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 01:50:00.988617 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 01:50:00.991477 waagent[1673]: 2024-07-02T01:50:00.991121Z INFO Daemon Daemon Create user account if not exists Jul 2 01:50:00.997201 waagent[1673]: 2024-07-02T01:50:00.997108Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 01:50:01.003025 waagent[1673]: 2024-07-02T01:50:01.002943Z INFO Daemon Daemon Configure sudoer Jul 2 01:50:01.008419 waagent[1673]: 2024-07-02T01:50:01.008293Z INFO Daemon Daemon Configure sshd Jul 2 01:50:01.012756 waagent[1673]: 2024-07-02T01:50:01.012677Z INFO Daemon Daemon Deploy ssh public key. Jul 2 01:50:01.022708 systemd-networkd[1730]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 01:50:01.038126 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 01:50:01.257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 01:50:01.257532 systemd[1]: Stopped kubelet.service. Jul 2 01:50:01.258992 systemd[1]: Starting kubelet.service... Jul 2 01:50:01.340218 systemd[1]: Started kubelet.service. 
Jul 2 01:50:01.895614 kubelet[1748]: E0702 01:50:01.895559 1748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:01.898259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:01.898406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:02.305088 waagent[1673]: 2024-07-02T01:50:02.300210Z INFO Daemon Daemon Provisioning complete Jul 2 01:50:02.322301 waagent[1673]: 2024-07-02T01:50:02.322231Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 01:50:02.328604 waagent[1673]: 2024-07-02T01:50:02.328513Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 2 01:50:02.339194 waagent[1673]: 2024-07-02T01:50:02.339111Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Jul 2 01:50:02.638781 waagent[1755]: 2024-07-02T01:50:02.638634Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Jul 2 01:50:02.639873 waagent[1755]: 2024-07-02T01:50:02.639813Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:02.640125 waagent[1755]: 2024-07-02T01:50:02.640077Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:02.657893 waagent[1755]: 2024-07-02T01:50:02.657796Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Jul 2 01:50:02.658272 waagent[1755]: 2024-07-02T01:50:02.658218Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Jul 2 01:50:02.733535 waagent[1755]: 2024-07-02T01:50:02.733400Z INFO ExtHandler ExtHandler Found private key matching thumbprint 7BB49B77886E69ECFE71A87ABD537418BE933108 Jul 2 01:50:02.733924 waagent[1755]: 2024-07-02T01:50:02.733871Z INFO ExtHandler ExtHandler Certificate with thumbprint 8FEE69FFBAFA37E44003AC684A2112A42F05A91B has no matching private key. Jul 2 01:50:02.734235 waagent[1755]: 2024-07-02T01:50:02.734187Z INFO ExtHandler ExtHandler Fetch goal state completed Jul 2 01:50:02.748429 waagent[1755]: 2024-07-02T01:50:02.748372Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: b6296b8a-fc07-4800-adc5-9e402f0a50b3 New eTag: 4783414783237914212] Jul 2 01:50:02.749172 waagent[1755]: 2024-07-02T01:50:02.749116Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Jul 2 01:50:02.826262 waagent[1755]: 2024-07-02T01:50:02.826131Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 01:50:02.851808 waagent[1755]: 2024-07-02T01:50:02.851718Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1755 Jul 2 01:50:02.855916 waagent[1755]: 2024-07-02T01:50:02.855832Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 01:50:02.857434 waagent[1755]: 2024-07-02T01:50:02.857362Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 01:50:02.986369 waagent[1755]: 2024-07-02T01:50:02.986266Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 01:50:02.986932 waagent[1755]: 2024-07-02T01:50:02.986877Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 01:50:02.994808 waagent[1755]: 2024-07-02T01:50:02.994759Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 01:50:02.995369 waagent[1755]: 2024-07-02T01:50:02.995315Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 01:50:02.996594 waagent[1755]: 2024-07-02T01:50:02.996532Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Jul 2 01:50:02.998000 waagent[1755]: 2024-07-02T01:50:02.997933Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 01:50:02.998266 waagent[1755]: 2024-07-02T01:50:02.998199Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:02.998851 waagent[1755]: 2024-07-02T01:50:02.998778Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:02.999423 waagent[1755]: 2024-07-02T01:50:02.999360Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 2 01:50:03.000054 waagent[1755]: 2024-07-02T01:50:02.999989Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 2 01:50:03.000222 waagent[1755]: 2024-07-02T01:50:03.000159Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 01:50:03.000222 waagent[1755]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 01:50:03.000222 waagent[1755]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 01:50:03.000222 waagent[1755]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 01:50:03.000222 waagent[1755]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:03.000222 waagent[1755]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:03.000222 waagent[1755]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:03.002466 waagent[1755]: 2024-07-02T01:50:03.002320Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 01:50:03.003001 waagent[1755]: 2024-07-02T01:50:03.002932Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:03.003321 waagent[1755]: 2024-07-02T01:50:03.003253Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 01:50:03.003558 waagent[1755]: 2024-07-02T01:50:03.003502Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:03.004363 waagent[1755]: 2024-07-02T01:50:03.004300Z INFO EnvHandler ExtHandler Configure routes Jul 2 01:50:03.004975 waagent[1755]: 2024-07-02T01:50:03.004898Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 01:50:03.005188 waagent[1755]: 2024-07-02T01:50:03.005113Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 2 01:50:03.005433 waagent[1755]: 2024-07-02T01:50:03.005372Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 01:50:03.006141 waagent[1755]: 2024-07-02T01:50:03.006081Z INFO EnvHandler ExtHandler Gateway:None Jul 2 01:50:03.009639 waagent[1755]: 2024-07-02T01:50:03.009556Z INFO EnvHandler ExtHandler Routes:None Jul 2 01:50:03.018759 waagent[1755]: 2024-07-02T01:50:03.018699Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Jul 2 01:50:03.019454 waagent[1755]: 2024-07-02T01:50:03.019405Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 01:50:03.020452 waagent[1755]: 2024-07-02T01:50:03.020399Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Jul 2 01:50:03.068774 waagent[1755]: 2024-07-02T01:50:03.068638Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1730' Jul 2 01:50:03.096206 waagent[1755]: 2024-07-02T01:50:03.096139Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Jul 2 01:50:03.183563 waagent[1755]: 2024-07-02T01:50:03.183432Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 01:50:03.183563 waagent[1755]: Executing ['ip', '-a', '-o', 'link']: Jul 2 01:50:03.183563 waagent[1755]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 01:50:03.183563 waagent[1755]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:a9 brd ff:ff:ff:ff:ff:ff Jul 2 01:50:03.183563 waagent[1755]: 3: enP11825s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:a9 brd ff:ff:ff:ff:ff:ff\ altname enP11825p0s2 Jul 2 01:50:03.183563 waagent[1755]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 01:50:03.183563 waagent[1755]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 01:50:03.183563 waagent[1755]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 01:50:03.183563 waagent[1755]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 01:50:03.183563 waagent[1755]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 01:50:03.183563 waagent[1755]: 2: eth0 inet6 fe80::222:48ff:fe7c:daa9/64 scope link \ valid_lft forever preferred_lft forever Jul 2 01:50:03.334910 waagent[1755]: 2024-07-02T01:50:03.334817Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.11.1.4 -- exiting Jul 2 01:50:04.343062 waagent[1673]: 2024-07-02T01:50:04.342920Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Jul 2 01:50:04.347640 waagent[1673]: 2024-07-02T01:50:04.347564Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.11.1.4 to be the latest agent Jul 2 01:50:05.542374 
waagent[1794]: 2024-07-02T01:50:05.542276Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.11.1.4) Jul 2 01:50:05.543466 waagent[1794]: 2024-07-02T01:50:05.543408Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.5 Jul 2 01:50:05.543711 waagent[1794]: 2024-07-02T01:50:05.543662Z INFO ExtHandler ExtHandler Python: 3.9.16 Jul 2 01:50:05.543928 waagent[1794]: 2024-07-02T01:50:05.543883Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 2 01:50:05.552052 waagent[1794]: 2024-07-02T01:50:05.551949Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 01:50:05.552569 waagent[1794]: 2024-07-02T01:50:05.552516Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:05.552851 waagent[1794]: 2024-07-02T01:50:05.552802Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:05.566358 waagent[1794]: 2024-07-02T01:50:05.566274Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 01:50:05.580395 waagent[1794]: 2024-07-02T01:50:05.580334Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 01:50:05.581665 waagent[1794]: 2024-07-02T01:50:05.581591Z INFO ExtHandler Jul 2 01:50:05.581904 waagent[1794]: 2024-07-02T01:50:05.581857Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e1f9a5ee-5d67-4bca-8249-69444245cbed eTag: 4783414783237914212 source: Fabric] Jul 2 01:50:05.582758 waagent[1794]: 2024-07-02T01:50:05.582702Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 2 01:50:05.584086 waagent[1794]: 2024-07-02T01:50:05.584028Z INFO ExtHandler Jul 2 01:50:05.584304 waagent[1794]: 2024-07-02T01:50:05.584257Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 01:50:05.590837 waagent[1794]: 2024-07-02T01:50:05.590789Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 01:50:05.591453 waagent[1794]: 2024-07-02T01:50:05.591408Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 01:50:05.610638 waagent[1794]: 2024-07-02T01:50:05.610557Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Jul 2 01:50:05.681304 waagent[1794]: 2024-07-02T01:50:05.681176Z INFO ExtHandler Downloaded certificate {'thumbprint': '8FEE69FFBAFA37E44003AC684A2112A42F05A91B', 'hasPrivateKey': False} Jul 2 01:50:05.682534 waagent[1794]: 2024-07-02T01:50:05.682476Z INFO ExtHandler Downloaded certificate {'thumbprint': '7BB49B77886E69ECFE71A87ABD537418BE933108', 'hasPrivateKey': True} Jul 2 01:50:05.683717 waagent[1794]: 2024-07-02T01:50:05.683658Z INFO ExtHandler Fetch goal state completed Jul 2 01:50:05.705526 waagent[1794]: 2024-07-02T01:50:05.705436Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022) Jul 2 01:50:05.717524 waagent[1794]: 2024-07-02T01:50:05.717428Z INFO ExtHandler ExtHandler WALinuxAgent-2.11.1.4 running as process 1794 Jul 2 01:50:05.721357 waagent[1794]: 2024-07-02T01:50:05.721296Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 01:50:05.723020 waagent[1794]: 2024-07-02T01:50:05.722962Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 01:50:05.728091 waagent[1794]: 2024-07-02T01:50:05.728042Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 01:50:05.728588 waagent[1794]: 
2024-07-02T01:50:05.728533Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 01:50:05.736468 waagent[1794]: 2024-07-02T01:50:05.736416Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 01:50:05.737107 waagent[1794]: 2024-07-02T01:50:05.737053Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 01:50:05.743578 waagent[1794]: 2024-07-02T01:50:05.743473Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 01:50:05.744831 waagent[1794]: 2024-07-02T01:50:05.744767Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 01:50:05.746588 waagent[1794]: 2024-07-02T01:50:05.746512Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 01:50:05.746999 waagent[1794]: 2024-07-02T01:50:05.746925Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:05.747476 waagent[1794]: 2024-07-02T01:50:05.747411Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:05.748128 waagent[1794]: 2024-07-02T01:50:05.748059Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 2 01:50:05.748442 waagent[1794]: 2024-07-02T01:50:05.748386Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 01:50:05.748442 waagent[1794]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 01:50:05.748442 waagent[1794]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 01:50:05.748442 waagent[1794]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 01:50:05.748442 waagent[1794]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:05.748442 waagent[1794]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:05.748442 waagent[1794]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:05.750876 waagent[1794]: 2024-07-02T01:50:05.750768Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 01:50:05.751487 waagent[1794]: 2024-07-02T01:50:05.751416Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:05.752642 waagent[1794]: 2024-07-02T01:50:05.751637Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:05.754538 waagent[1794]: 2024-07-02T01:50:05.754464Z INFO EnvHandler ExtHandler Configure routes Jul 2 01:50:05.754640 waagent[1794]: 2024-07-02T01:50:05.753503Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 01:50:05.754938 waagent[1794]: 2024-07-02T01:50:05.754879Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 01:50:05.755758 waagent[1794]: 2024-07-02T01:50:05.755694Z INFO EnvHandler ExtHandler Gateway:None Jul 2 01:50:05.758054 waagent[1794]: 2024-07-02T01:50:05.757921Z INFO EnvHandler ExtHandler Routes:None Jul 2 01:50:05.758472 waagent[1794]: 2024-07-02T01:50:05.758407Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 01:50:05.758712 waagent[1794]: 2024-07-02T01:50:05.758644Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Jul 2 01:50:05.761589 waagent[1794]: 2024-07-02T01:50:05.761516Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 01:50:05.772379 waagent[1794]: 2024-07-02T01:50:05.772113Z INFO ExtHandler ExtHandler Downloading agent manifest Jul 2 01:50:05.773467 waagent[1794]: 2024-07-02T01:50:05.773386Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 01:50:05.773467 waagent[1794]: Executing ['ip', '-a', '-o', 'link']: Jul 2 01:50:05.773467 waagent[1794]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 01:50:05.773467 waagent[1794]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:a9 brd ff:ff:ff:ff:ff:ff Jul 2 01:50:05.773467 waagent[1794]: 3: enP11825s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:a9 brd ff:ff:ff:ff:ff:ff\ altname enP11825p0s2 Jul 2 01:50:05.773467 waagent[1794]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 01:50:05.773467 waagent[1794]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 01:50:05.773467 waagent[1794]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 01:50:05.773467 waagent[1794]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 01:50:05.773467 waagent[1794]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 01:50:05.773467 waagent[1794]: 2: eth0 inet6 fe80::222:48ff:fe7c:daa9/64 scope link \ valid_lft forever preferred_lft forever Jul 2 01:50:05.792451 waagent[1794]: 2024-07-02T01:50:05.792314Z INFO ExtHandler ExtHandler Jul 2 01:50:05.794018 waagent[1794]: 2024-07-02T01:50:05.793942Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started 
[incarnation_1 channel: WireServer source: Fabric activity: d4093f9e-f040-40b3-93e5-83cada4d3124 correlation 3ae297a9-4cae-4354-b023-0f6cc69579fd created: 2024-07-02T01:48:29.613747Z] Jul 2 01:50:05.797357 waagent[1794]: 2024-07-02T01:50:05.797272Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 2 01:50:05.801962 waagent[1794]: 2024-07-02T01:50:05.801849Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Jul 2 01:50:05.827382 waagent[1794]: 2024-07-02T01:50:05.827312Z INFO ExtHandler ExtHandler Looking for existing remote access users. Jul 2 01:50:05.848482 waagent[1794]: 2024-07-02T01:50:05.848324Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.11.1.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AEC7350A-A09D-4C31-B6B8-FC77BF4E78BF;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Jul 2 01:50:05.940396 waagent[1794]: 2024-07-02T01:50:05.940277Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 2 01:50:05.940396 waagent[1794]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:05.940396 waagent[1794]: pkts bytes target prot opt in out source destination Jul 2 01:50:05.940396 waagent[1794]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:05.940396 waagent[1794]: pkts bytes target prot opt in out source destination Jul 2 01:50:05.940396 waagent[1794]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes) Jul 2 01:50:05.940396 waagent[1794]: pkts bytes target prot opt in out source destination Jul 2 01:50:05.940396 waagent[1794]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 01:50:05.940396 waagent[1794]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 01:50:05.940396 waagent[1794]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 01:50:05.948419 waagent[1794]: 2024-07-02T01:50:05.948311Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 
01:50:05.948419 waagent[1794]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:05.948419 waagent[1794]: pkts bytes target prot opt in out source destination Jul 2 01:50:05.948419 waagent[1794]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:05.948419 waagent[1794]: pkts bytes target prot opt in out source destination Jul 2 01:50:05.948419 waagent[1794]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes) Jul 2 01:50:05.948419 waagent[1794]: pkts bytes target prot opt in out source destination Jul 2 01:50:05.948419 waagent[1794]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 01:50:05.948419 waagent[1794]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 01:50:05.948419 waagent[1794]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 01:50:05.949248 waagent[1794]: 2024-07-02T01:50:05.949203Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 01:50:12.007293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 01:50:12.007475 systemd[1]: Stopped kubelet.service. Jul 2 01:50:12.008918 systemd[1]: Starting kubelet.service... Jul 2 01:50:12.084623 systemd[1]: Started kubelet.service. Jul 2 01:50:12.126357 kubelet[1850]: E0702 01:50:12.126298 1850 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:12.128519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:12.128687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:22.257367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 01:50:22.257538 systemd[1]: Stopped kubelet.service. 
Jul 2 01:50:22.258974 systemd[1]: Starting kubelet.service... Jul 2 01:50:22.336478 systemd[1]: Started kubelet.service. Jul 2 01:50:22.386443 kubelet[1865]: E0702 01:50:22.386396 1865 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:22.388454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:22.388589 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:28.153621 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 2 01:50:32.507402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 01:50:32.507556 systemd[1]: Stopped kubelet.service. Jul 2 01:50:32.509048 systemd[1]: Starting kubelet.service... Jul 2 01:50:32.590382 systemd[1]: Started kubelet.service. Jul 2 01:50:32.634408 kubelet[1880]: E0702 01:50:32.634363 1880 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:32.636339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:32.636476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:35.547713 update_engine[1545]: I0702 01:50:35.547659 1545 update_attempter.cc:509] Updating boot flags... Jul 2 01:50:36.458813 systemd[1]: Created slice system-sshd.slice. Jul 2 01:50:36.459957 systemd[1]: Started sshd@0-10.200.20.41:22-10.200.16.10:41526.service. 
Jul 2 01:50:37.109568 sshd[1926]: Accepted publickey for core from 10.200.16.10 port 41526 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:37.126754 sshd[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:37.130804 systemd-logind[1543]: New session 3 of user core. Jul 2 01:50:37.131228 systemd[1]: Started session-3.scope. Jul 2 01:50:37.527091 systemd[1]: Started sshd@1-10.200.20.41:22-10.200.16.10:41542.service. Jul 2 01:50:37.996390 sshd[1931]: Accepted publickey for core from 10.200.16.10 port 41542 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:37.998150 sshd[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:38.001834 systemd-logind[1543]: New session 4 of user core. Jul 2 01:50:38.002224 systemd[1]: Started session-4.scope. Jul 2 01:50:38.345191 sshd[1931]: pam_unix(sshd:session): session closed for user core Jul 2 01:50:38.347828 systemd[1]: sshd@1-10.200.20.41:22-10.200.16.10:41542.service: Deactivated successfully. Jul 2 01:50:38.348539 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 01:50:38.349497 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Jul 2 01:50:38.350261 systemd-logind[1543]: Removed session 4. Jul 2 01:50:38.420240 systemd[1]: Started sshd@2-10.200.20.41:22-10.200.16.10:32832.service. Jul 2 01:50:38.856379 sshd[1938]: Accepted publickey for core from 10.200.16.10 port 32832 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:38.857947 sshd[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:38.861561 systemd-logind[1543]: New session 5 of user core. Jul 2 01:50:38.861977 systemd[1]: Started session-5.scope. Jul 2 01:50:39.169153 sshd[1938]: pam_unix(sshd:session): session closed for user core Jul 2 01:50:39.172412 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. 
Jul 2 01:50:39.172730 systemd[1]: sshd@2-10.200.20.41:22-10.200.16.10:32832.service: Deactivated successfully. Jul 2 01:50:39.173409 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 01:50:39.174417 systemd-logind[1543]: Removed session 5. Jul 2 01:50:39.239083 systemd[1]: Started sshd@3-10.200.20.41:22-10.200.16.10:32836.service. Jul 2 01:50:39.668663 sshd[1945]: Accepted publickey for core from 10.200.16.10 port 32836 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:39.670311 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:39.674172 systemd-logind[1543]: New session 6 of user core. Jul 2 01:50:39.674655 systemd[1]: Started session-6.scope. Jul 2 01:50:39.982737 sshd[1945]: pam_unix(sshd:session): session closed for user core Jul 2 01:50:39.986216 systemd[1]: sshd@3-10.200.20.41:22-10.200.16.10:32836.service: Deactivated successfully. Jul 2 01:50:39.987147 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Jul 2 01:50:39.987199 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 01:50:39.988259 systemd-logind[1543]: Removed session 6. Jul 2 01:50:40.058763 systemd[1]: Started sshd@4-10.200.20.41:22-10.200.16.10:32848.service. Jul 2 01:50:40.524096 sshd[1952]: Accepted publickey for core from 10.200.16.10 port 32848 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:40.525344 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:40.529279 systemd-logind[1543]: New session 7 of user core. Jul 2 01:50:40.529728 systemd[1]: Started session-7.scope. Jul 2 01:50:41.042210 sudo[1959]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 01:50:41.042431 sudo[1959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 01:50:41.075948 systemd[1]: Starting docker.service... 
Jul 2 01:50:41.108012 env[1969]: time="2024-07-02T01:50:41.107966243Z" level=info msg="Starting up" Jul 2 01:50:41.109503 env[1969]: time="2024-07-02T01:50:41.109477824Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 01:50:41.109503 env[1969]: time="2024-07-02T01:50:41.109497785Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 01:50:41.109625 env[1969]: time="2024-07-02T01:50:41.109517985Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 01:50:41.109625 env[1969]: time="2024-07-02T01:50:41.109527945Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 01:50:41.111292 env[1969]: time="2024-07-02T01:50:41.111271090Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 01:50:41.111379 env[1969]: time="2024-07-02T01:50:41.111366291Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 01:50:41.111455 env[1969]: time="2024-07-02T01:50:41.111432292Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 01:50:41.111506 env[1969]: time="2024-07-02T01:50:41.111494773Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 01:50:41.119784 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2246143543-merged.mount: Deactivated successfully. Jul 2 01:50:41.208770 env[1969]: time="2024-07-02T01:50:41.208733945Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 2 01:50:41.208952 env[1969]: time="2024-07-02T01:50:41.208938868Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 2 01:50:41.209156 env[1969]: time="2024-07-02T01:50:41.209143711Z" level=info msg="Loading containers: start." 
Jul 2 01:50:41.356631 kernel: Initializing XFRM netlink socket Jul 2 01:50:41.379389 env[1969]: time="2024-07-02T01:50:41.379340953Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 01:50:41.491130 systemd-networkd[1730]: docker0: Link UP Jul 2 01:50:41.508127 env[1969]: time="2024-07-02T01:50:41.508096090Z" level=info msg="Loading containers: done." Jul 2 01:50:41.519493 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck18800116-merged.mount: Deactivated successfully. Jul 2 01:50:41.531594 env[1969]: time="2024-07-02T01:50:41.531554861Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 01:50:41.531925 env[1969]: time="2024-07-02T01:50:41.531908706Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 01:50:41.532091 env[1969]: time="2024-07-02T01:50:41.532078029Z" level=info msg="Daemon has completed initialization" Jul 2 01:50:41.564244 systemd[1]: Started docker.service. Jul 2 01:50:41.570783 env[1969]: time="2024-07-02T01:50:41.570696374Z" level=info msg="API listen on /run/docker.sock" Jul 2 01:50:42.757295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 01:50:42.757465 systemd[1]: Stopped kubelet.service. Jul 2 01:50:42.758973 systemd[1]: Starting kubelet.service... Jul 2 01:50:42.862854 systemd[1]: Started kubelet.service. 
Jul 2 01:50:42.911709 kubelet[2091]: E0702 01:50:42.911660 2091 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:42.913615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:42.913771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:45.890666 env[1557]: time="2024-07-02T01:50:45.890621948Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 01:50:46.787529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560452883.mount: Deactivated successfully. Jul 2 01:50:48.445209 env[1557]: time="2024-07-02T01:50:48.445162858Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.451784 env[1557]: time="2024-07-02T01:50:48.451742834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.454794 env[1557]: time="2024-07-02T01:50:48.454749199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.458917 env[1557]: time="2024-07-02T01:50:48.458879143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.459726 env[1557]: time="2024-07-02T01:50:48.459695234Z" level=info 
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jul 2 01:50:48.468873 env[1557]: time="2024-07-02T01:50:48.468837892Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 01:50:50.732820 env[1557]: time="2024-07-02T01:50:50.732775681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.738765 env[1557]: time="2024-07-02T01:50:50.738720776Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.745149 env[1557]: time="2024-07-02T01:50:50.745111008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.751039 env[1557]: time="2024-07-02T01:50:50.751003494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.751852 env[1557]: time="2024-07-02T01:50:50.751819933Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 2 01:50:50.761701 env[1557]: time="2024-07-02T01:50:50.761652096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 01:50:52.377837 env[1557]: time="2024-07-02T01:50:52.377789066Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:52.383087 env[1557]: time="2024-07-02T01:50:52.383044616Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:52.388424 env[1557]: time="2024-07-02T01:50:52.388381281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:52.392417 env[1557]: time="2024-07-02T01:50:52.392386727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:52.393229 env[1557]: time="2024-07-02T01:50:52.393201233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 2 01:50:52.402474 env[1557]: time="2024-07-02T01:50:52.402442561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 01:50:53.007349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 2 01:50:53.007514 systemd[1]: Stopped kubelet.service. Jul 2 01:50:53.009024 systemd[1]: Starting kubelet.service... Jul 2 01:50:53.088268 systemd[1]: Started kubelet.service. 
Jul 2 01:50:53.134522 kubelet[2128]: E0702 01:50:53.134477 2128 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:53.136389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:53.136521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:54.090706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422757609.mount: Deactivated successfully. Jul 2 01:50:54.823809 env[1557]: time="2024-07-02T01:50:54.823759621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.831019 env[1557]: time="2024-07-02T01:50:54.830930318Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.835911 env[1557]: time="2024-07-02T01:50:54.835867074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.839544 env[1557]: time="2024-07-02T01:50:54.839515190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.840020 env[1557]: time="2024-07-02T01:50:54.839994897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference 
\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 01:50:54.849085 env[1557]: time="2024-07-02T01:50:54.849045003Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 01:50:55.416804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3680691258.mount: Deactivated successfully. Jul 2 01:50:55.444850 env[1557]: time="2024-07-02T01:50:55.444794976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.453264 env[1557]: time="2024-07-02T01:50:55.453213056Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.458371 env[1557]: time="2024-07-02T01:50:55.458335068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.464183 env[1557]: time="2024-07-02T01:50:55.464141528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.464968 env[1557]: time="2024-07-02T01:50:55.464941558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 01:50:55.475331 env[1557]: time="2024-07-02T01:50:55.475291047Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 01:50:56.180856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245396060.mount: Deactivated successfully. 
Jul 2 01:51:01.248067 env[1557]: time="2024-07-02T01:51:01.248014246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:01.299079 env[1557]: time="2024-07-02T01:51:01.299032594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:01.341963 env[1557]: time="2024-07-02T01:51:01.341923967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:01.404370 env[1557]: time="2024-07-02T01:51:01.404323032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:01.404730 env[1557]: time="2024-07-02T01:51:01.404703666Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 01:51:01.413133 env[1557]: time="2024-07-02T01:51:01.413103522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 01:51:02.610100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494769514.mount: Deactivated successfully. Jul 2 01:51:03.257314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 2 01:51:03.257492 systemd[1]: Stopped kubelet.service. Jul 2 01:51:03.259010 systemd[1]: Starting kubelet.service... Jul 2 01:51:03.339798 systemd[1]: Started kubelet.service. 
Jul 2 01:51:03.382131 kubelet[2159]: E0702 01:51:03.382070 2159 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:51:03.384050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:51:03.384196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:51:07.190838 env[1557]: time="2024-07-02T01:51:07.190795116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:07.294332 env[1557]: time="2024-07-02T01:51:07.294265054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:07.340276 env[1557]: time="2024-07-02T01:51:07.340225134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:07.406007 env[1557]: time="2024-07-02T01:51:07.405947893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:07.406771 env[1557]: time="2024-07-02T01:51:07.406736319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jul 2 01:51:12.411282 systemd[1]: Stopped kubelet.service. 
Jul 2 01:51:12.413746 systemd[1]: Starting kubelet.service... Jul 2 01:51:12.442110 systemd[1]: Reloading. Jul 2 01:51:12.515797 /usr/lib/systemd/system-generators/torcx-generator[2253]: time="2024-07-02T01:51:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 01:51:12.516033 /usr/lib/systemd/system-generators/torcx-generator[2253]: time="2024-07-02T01:51:12Z" level=info msg="torcx already run" Jul 2 01:51:12.595811 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 01:51:12.595970 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 01:51:12.613233 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 01:51:12.698878 systemd[1]: Started kubelet.service. Jul 2 01:51:12.702331 systemd[1]: Stopping kubelet.service... Jul 2 01:51:12.703267 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 01:51:12.703635 systemd[1]: Stopped kubelet.service. Jul 2 01:51:12.705976 systemd[1]: Starting kubelet.service... Jul 2 01:51:15.499921 systemd[1]: Started kubelet.service. Jul 2 01:51:15.546372 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 01:51:15.546372 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 2 01:51:15.546372 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 01:51:15.546749 kubelet[2333]: I0702 01:51:15.546431 2333 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 01:51:16.556217 kubelet[2333]: I0702 01:51:16.556189 2333 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 01:51:16.556571 kubelet[2333]: I0702 01:51:16.556557 2333 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 01:51:16.556866 kubelet[2333]: I0702 01:51:16.556850 2333 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 01:51:16.573068 kubelet[2333]: E0702 01:51:16.573048 2333 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:16.573240 kubelet[2333]: I0702 01:51:16.573229 2333 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 01:51:16.580958 kubelet[2333]: W0702 01:51:16.580935 2333 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 01:51:16.581485 kubelet[2333]: I0702 01:51:16.581468 2333 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 01:51:16.581824 kubelet[2333]: I0702 01:51:16.581808 2333 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 01:51:16.581983 kubelet[2333]: I0702 01:51:16.581967 2333 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 01:51:16.582070 kubelet[2333]: I0702 01:51:16.581994 2333 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 01:51:16.582070 kubelet[2333]: I0702 01:51:16.582002 2333 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 01:51:16.582122 kubelet[2333]: I0702 
01:51:16.582094 2333 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:16.590409 kubelet[2333]: W0702 01:51:16.590371 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-637f296955&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:16.590511 kubelet[2333]: E0702 01:51:16.590501 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-637f296955&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:16.590715 kubelet[2333]: I0702 01:51:16.590695 2333 kubelet.go:393] "Attempting to sync node with API server" Jul 2 01:51:16.590763 kubelet[2333]: I0702 01:51:16.590722 2333 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 01:51:16.590763 kubelet[2333]: I0702 01:51:16.590752 2333 kubelet.go:309] "Adding apiserver pod source" Jul 2 01:51:16.590763 kubelet[2333]: I0702 01:51:16.590763 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 01:51:16.592736 kubelet[2333]: I0702 01:51:16.592709 2333 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 01:51:16.594444 kubelet[2333]: W0702 01:51:16.594397 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:16.594444 kubelet[2333]: E0702 01:51:16.594445 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:16.596264 kubelet[2333]: W0702 01:51:16.596226 2333 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 01:51:16.602624 kubelet[2333]: I0702 01:51:16.602575 2333 server.go:1232] "Started kubelet" Jul 2 01:51:16.603020 kubelet[2333]: I0702 01:51:16.603003 2333 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 01:51:16.603776 kubelet[2333]: I0702 01:51:16.603759 2333 server.go:462] "Adding debug handlers to kubelet server" Jul 2 01:51:16.604853 kubelet[2333]: I0702 01:51:16.604813 2333 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 01:51:16.605078 kubelet[2333]: I0702 01:51:16.605053 2333 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 01:51:16.605311 kubelet[2333]: E0702 01:51:16.605224 2333 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.5-a-637f296955.17de425d656402b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.5-a-637f296955", UID:"ci-3510.3.5-a-637f296955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-637f296955"}, FirstTimestamp:time.Date(2024, time.July, 2, 1, 51, 16, 
602553011, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 1, 51, 16, 602553011, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-637f296955"}': 'Post "https://10.200.20.41:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.41:6443: connect: connection refused'(may retry after sleeping) Jul 2 01:51:16.613801 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 01:51:16.613870 kubelet[2333]: E0702 01:51:16.607108 2333 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 01:51:16.613870 kubelet[2333]: E0702 01:51:16.607130 2333 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 01:51:16.614099 kubelet[2333]: I0702 01:51:16.614084 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 01:51:16.616100 kubelet[2333]: E0702 01:51:16.615708 2333 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-637f296955\" not found" Jul 2 01:51:16.616100 kubelet[2333]: I0702 01:51:16.615923 2333 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 01:51:16.616100 kubelet[2333]: I0702 01:51:16.616028 2333 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 01:51:16.616227 kubelet[2333]: I0702 01:51:16.616107 2333 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 01:51:16.616411 kubelet[2333]: W0702 01:51:16.616364 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:16.616411 kubelet[2333]: E0702 01:51:16.616410 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:16.617561 kubelet[2333]: E0702 01:51:16.617522 2333 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-637f296955?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="200ms" Jul 2 01:51:16.687510 kubelet[2333]: I0702 01:51:16.687489 2333 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 01:51:16.687681 kubelet[2333]: I0702 01:51:16.687671 2333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 01:51:16.687756 kubelet[2333]: I0702 01:51:16.687749 2333 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:16.717572 kubelet[2333]: I0702 01:51:16.717551 2333 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:16.718081 kubelet[2333]: E0702 01:51:16.718057 2333 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:16.819865 kubelet[2333]: E0702 01:51:16.818742 2333 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-637f296955?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="400ms" Jul 2 01:51:16.920395 kubelet[2333]: 
I0702 01:51:16.920376 2333 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:16.920861 kubelet[2333]: E0702 01:51:16.920847 2333 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:18.636545 kubelet[2333]: E0702 01:51:17.220125 2333 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-637f296955?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="800ms" Jul 2 01:51:18.636545 kubelet[2333]: I0702 01:51:17.322871 2333 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:18.636545 kubelet[2333]: E0702 01:51:17.323166 2333 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:18.636545 kubelet[2333]: W0702 01:51:17.482866 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.636545 kubelet[2333]: E0702 01:51:17.482920 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.636545 kubelet[2333]: W0702 01:51:17.841186 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-637f296955&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.636545 kubelet[2333]: E0702 01:51:17.841234 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-637f296955&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.637008 kubelet[2333]: W0702 01:51:17.990897 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.637008 kubelet[2333]: E0702 01:51:17.990921 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.637008 kubelet[2333]: E0702 01:51:18.021325 2333 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-637f296955?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="1.6s" Jul 2 01:51:18.637008 kubelet[2333]: I0702 01:51:18.125084 2333 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:18.637008 kubelet[2333]: E0702 01:51:18.125350 2333 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:18.681681 
kubelet[2333]: E0702 01:51:18.681657 2333 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.696501 kubelet[2333]: I0702 01:51:18.696474 2333 policy_none.go:49] "None policy: Start" Jul 2 01:51:18.697359 kubelet[2333]: I0702 01:51:18.697344 2333 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 01:51:18.697472 kubelet[2333]: I0702 01:51:18.697463 2333 state_mem.go:35] "Initializing new in-memory state store" Jul 2 01:51:18.704542 kubelet[2333]: I0702 01:51:18.704520 2333 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 01:51:18.708339 kubelet[2333]: I0702 01:51:18.708306 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 01:51:18.711161 kubelet[2333]: E0702 01:51:18.711133 2333 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-637f296955\" not found" Jul 2 01:51:18.740521 kubelet[2333]: I0702 01:51:18.740493 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 01:51:18.741742 kubelet[2333]: I0702 01:51:18.741721 2333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 01:51:18.741858 kubelet[2333]: I0702 01:51:18.741849 2333 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 01:51:18.742066 kubelet[2333]: I0702 01:51:18.742055 2333 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 01:51:18.742221 kubelet[2333]: E0702 01:51:18.742204 2333 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 01:51:18.743119 kubelet[2333]: W0702 01:51:18.743086 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.743216 kubelet[2333]: E0702 01:51:18.743127 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:18.842850 kubelet[2333]: I0702 01:51:18.842816 2333 topology_manager.go:215] "Topology Admit Handler" podUID="492ddebccae5acac1deb05d137a923c3" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.844440 kubelet[2333]: I0702 01:51:18.844417 2333 topology_manager.go:215] "Topology Admit Handler" podUID="6ce023f629119284d408e390cdb28145" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.848139 kubelet[2333]: I0702 01:51:18.848118 2333 topology_manager.go:215] "Topology Admit Handler" podUID="48528d3091784103fe477d2561eed7f1" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928513 kubelet[2333]: I0702 01:51:18.927722 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928718 kubelet[2333]: I0702 01:51:18.928689 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48528d3091784103fe477d2561eed7f1-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-637f296955\" (UID: \"48528d3091784103fe477d2561eed7f1\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928774 kubelet[2333]: I0702 01:51:18.928763 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48528d3091784103fe477d2561eed7f1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-637f296955\" (UID: \"48528d3091784103fe477d2561eed7f1\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928803 kubelet[2333]: I0702 01:51:18.928785 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928830 kubelet[2333]: I0702 01:51:18.928819 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928854 kubelet[2333]: I0702 
01:51:18.928841 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928904 kubelet[2333]: I0702 01:51:18.928887 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928997 kubelet[2333]: I0702 01:51:18.928912 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ce023f629119284d408e390cdb28145-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-637f296955\" (UID: \"6ce023f629119284d408e390cdb28145\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-637f296955" Jul 2 01:51:18.928997 kubelet[2333]: I0702 01:51:18.928932 2333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48528d3091784103fe477d2561eed7f1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-637f296955\" (UID: \"48528d3091784103fe477d2561eed7f1\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:19.148675 env[1557]: time="2024-07-02T01:51:19.148631142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-637f296955,Uid:492ddebccae5acac1deb05d137a923c3,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:19.153961 env[1557]: time="2024-07-02T01:51:19.153733372Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-637f296955,Uid:6ce023f629119284d408e390cdb28145,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:19.154718 env[1557]: time="2024-07-02T01:51:19.154686789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-637f296955,Uid:48528d3091784103fe477d2561eed7f1,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:19.160439 kubelet[2333]: W0702 01:51:19.160388 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:19.160503 kubelet[2333]: E0702 01:51:19.160453 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:19.622011 kubelet[2333]: E0702 01:51:19.621983 2333 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-637f296955?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="3.2s" Jul 2 01:51:19.726824 kubelet[2333]: I0702 01:51:19.726796 2333 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:19.727149 kubelet[2333]: E0702 01:51:19.727081 2333 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:19.792853 kubelet[2333]: W0702 01:51:19.792823 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get 
"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:19.792853 kubelet[2333]: E0702 01:51:19.792857 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:19.822869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654025986.mount: Deactivated successfully. Jul 2 01:51:19.854649 env[1557]: time="2024-07-02T01:51:19.854611845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.860270 env[1557]: time="2024-07-02T01:51:19.860246087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.874204 env[1557]: time="2024-07-02T01:51:19.873781586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.879179 env[1557]: time="2024-07-02T01:51:19.879151982Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.883386 env[1557]: time="2024-07-02T01:51:19.883347822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.887710 env[1557]: time="2024-07-02T01:51:19.887685223Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.891374 env[1557]: time="2024-07-02T01:51:19.891336094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.896454 env[1557]: time="2024-07-02T01:51:19.896424048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.904262 env[1557]: time="2024-07-02T01:51:19.904233089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.908656 env[1557]: time="2024-07-02T01:51:19.908627315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.922875 env[1557]: time="2024-07-02T01:51:19.922846544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.945766 env[1557]: time="2024-07-02T01:51:19.945723061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:19.989122 env[1557]: time="2024-07-02T01:51:19.989056843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:19.989435 env[1557]: time="2024-07-02T01:51:19.989347363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:19.989435 env[1557]: time="2024-07-02T01:51:19.989363119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:19.989720 env[1557]: time="2024-07-02T01:51:19.989663796Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fa96a9764a2bc7b2432634670f83d3e78712988a8989654e3326a46bedca7e9 pid=2371 runtime=io.containerd.runc.v2 Jul 2 01:51:20.006393 kubelet[2333]: W0702 01:51:20.006164 2333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:20.006393 kubelet[2333]: E0702 01:51:20.006316 2333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Jul 2 01:51:20.024148 env[1557]: time="2024-07-02T01:51:20.020116498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:20.024148 env[1557]: time="2024-07-02T01:51:20.020151449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:20.024148 env[1557]: time="2024-07-02T01:51:20.020175802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:20.024148 env[1557]: time="2024-07-02T01:51:20.020293210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93d17d19900d077d7736a481f4fc1a4d11c40ccd05878fdadc6c21579924235c pid=2408 runtime=io.containerd.runc.v2 Jul 2 01:51:20.033707 env[1557]: time="2024-07-02T01:51:20.033594420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:20.033850 env[1557]: time="2024-07-02T01:51:20.033714348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:20.033850 env[1557]: time="2024-07-02T01:51:20.033741060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:20.034050 env[1557]: time="2024-07-02T01:51:20.034002670Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7048dc359e5221d79fedfa21563b9e3038763a79eeaa791eadf66db01dca9b13 pid=2430 runtime=io.containerd.runc.v2 Jul 2 01:51:20.066194 env[1557]: time="2024-07-02T01:51:20.066136276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-637f296955,Uid:6ce023f629119284d408e390cdb28145,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fa96a9764a2bc7b2432634670f83d3e78712988a8989654e3326a46bedca7e9\"" Jul 2 01:51:20.071722 env[1557]: time="2024-07-02T01:51:20.071678820Z" level=info msg="CreateContainer within sandbox \"5fa96a9764a2bc7b2432634670f83d3e78712988a8989654e3326a46bedca7e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 01:51:20.092413 env[1557]: time="2024-07-02T01:51:20.092305612Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-637f296955,Uid:48528d3091784103fe477d2561eed7f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7048dc359e5221d79fedfa21563b9e3038763a79eeaa791eadf66db01dca9b13\"" Jul 2 01:51:20.096656 env[1557]: time="2024-07-02T01:51:20.096622007Z" level=info msg="CreateContainer within sandbox \"7048dc359e5221d79fedfa21563b9e3038763a79eeaa791eadf66db01dca9b13\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 01:51:20.107619 env[1557]: time="2024-07-02T01:51:20.107545218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-637f296955,Uid:492ddebccae5acac1deb05d137a923c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"93d17d19900d077d7736a481f4fc1a4d11c40ccd05878fdadc6c21579924235c\"" Jul 2 01:51:20.110393 env[1557]: time="2024-07-02T01:51:20.110359139Z" level=info msg="CreateContainer within sandbox \"93d17d19900d077d7736a481f4fc1a4d11c40ccd05878fdadc6c21579924235c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 01:51:20.121346 env[1557]: time="2024-07-02T01:51:20.121285229Z" level=info msg="CreateContainer within sandbox \"5fa96a9764a2bc7b2432634670f83d3e78712988a8989654e3326a46bedca7e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17e3cd4ccda0b55d9c4fef98005dbf595971e1d0f8b031f8062d720d588e99d6\"" Jul 2 01:51:20.121978 env[1557]: time="2024-07-02T01:51:20.121954889Z" level=info msg="StartContainer for \"17e3cd4ccda0b55d9c4fef98005dbf595971e1d0f8b031f8062d720d588e99d6\"" Jul 2 01:51:20.156923 env[1557]: time="2024-07-02T01:51:20.153796694Z" level=info msg="CreateContainer within sandbox \"7048dc359e5221d79fedfa21563b9e3038763a79eeaa791eadf66db01dca9b13\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"706be7f58f319f158e66b29f2c168bcab4822cb744a6dddf42f356cbbd1fab71\"" Jul 2 01:51:20.159721 env[1557]: time="2024-07-02T01:51:20.157232806Z" 
level=info msg="StartContainer for \"706be7f58f319f158e66b29f2c168bcab4822cb744a6dddf42f356cbbd1fab71\"" Jul 2 01:51:20.173896 env[1557]: time="2024-07-02T01:51:20.173853800Z" level=info msg="CreateContainer within sandbox \"93d17d19900d077d7736a481f4fc1a4d11c40ccd05878fdadc6c21579924235c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5937db8b62353b2b043a1564a92eb57d253874a5a274ec360feadc21965e68fc\"" Jul 2 01:51:20.174381 env[1557]: time="2024-07-02T01:51:20.174352305Z" level=info msg="StartContainer for \"5937db8b62353b2b043a1564a92eb57d253874a5a274ec360feadc21965e68fc\"" Jul 2 01:51:20.190772 env[1557]: time="2024-07-02T01:51:20.190726245Z" level=info msg="StartContainer for \"17e3cd4ccda0b55d9c4fef98005dbf595971e1d0f8b031f8062d720d588e99d6\" returns successfully" Jul 2 01:51:20.257933 env[1557]: time="2024-07-02T01:51:20.257892075Z" level=info msg="StartContainer for \"706be7f58f319f158e66b29f2c168bcab4822cb744a6dddf42f356cbbd1fab71\" returns successfully" Jul 2 01:51:20.259920 env[1557]: time="2024-07-02T01:51:20.259887256Z" level=info msg="StartContainer for \"5937db8b62353b2b043a1564a92eb57d253874a5a274ec360feadc21965e68fc\" returns successfully" Jul 2 01:51:22.560261 kubelet[2333]: E0702 01:51:22.560229 2333 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.5-a-637f296955" not found Jul 2 01:51:22.595191 kubelet[2333]: I0702 01:51:22.595153 2333 apiserver.go:52] "Watching apiserver" Jul 2 01:51:22.616253 kubelet[2333]: I0702 01:51:22.616220 2333 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 01:51:22.832436 kubelet[2333]: E0702 01:51:22.832340 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-637f296955\" not found" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:22.928569 kubelet[2333]: I0702 
01:51:22.928547 2333 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:22.933419 kubelet[2333]: I0702 01:51:22.933380 2333 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:23.246720 kubelet[2333]: W0702 01:51:23.246629 2333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:25.159926 systemd[1]: Reloading. Jul 2 01:51:25.278224 /usr/lib/systemd/system-generators/torcx-generator[2627]: time="2024-07-02T01:51:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 01:51:25.279731 /usr/lib/systemd/system-generators/torcx-generator[2627]: time="2024-07-02T01:51:25Z" level=info msg="torcx already run" Jul 2 01:51:25.398177 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 01:51:25.398358 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 01:51:25.416157 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 01:51:25.521083 kubelet[2333]: I0702 01:51:25.521052 2333 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 01:51:25.523491 systemd[1]: Stopping kubelet.service... Jul 2 01:51:25.538073 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 2 01:51:25.538537 systemd[1]: Stopped kubelet.service. Jul 2 01:51:25.540354 systemd[1]: Starting kubelet.service... Jul 2 01:51:25.620923 systemd[1]: Started kubelet.service. Jul 2 01:51:25.709987 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 01:51:25.709987 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 01:51:25.709987 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 01:51:25.710377 kubelet[2702]: I0702 01:51:25.709963 2702 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 01:51:25.714427 kubelet[2702]: I0702 01:51:25.714409 2702 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 01:51:25.714537 kubelet[2702]: I0702 01:51:25.714527 2702 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 01:51:25.714790 kubelet[2702]: I0702 01:51:25.714777 2702 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 01:51:25.716464 kubelet[2702]: I0702 01:51:25.716446 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 01:51:25.717697 kubelet[2702]: I0702 01:51:25.717680 2702 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 01:51:25.725576 kubelet[2702]: W0702 01:51:25.725558 2702 machine.go:65] Cannot read vendor id correctly, set empty. 
Jul 2 01:51:25.726329 kubelet[2702]: I0702 01:51:25.726313 2702 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 01:51:25.726835 kubelet[2702]: I0702 01:51:25.726821 2702 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 01:51:25.727070 kubelet[2702]: I0702 01:51:25.727054 2702 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 01:51:25.727199 kubelet[2702]: I0702 01:51:25.727188 2702 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 01:51:25.727264 
kubelet[2702]: I0702 01:51:25.727255 2702 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 01:51:25.727345 kubelet[2702]: I0702 01:51:25.727336 2702 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:25.727483 kubelet[2702]: I0702 01:51:25.727473 2702 kubelet.go:393] "Attempting to sync node with API server" Jul 2 01:51:25.727554 kubelet[2702]: I0702 01:51:25.727544 2702 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 01:51:25.727656 kubelet[2702]: I0702 01:51:25.727645 2702 kubelet.go:309] "Adding apiserver pod source" Jul 2 01:51:25.727731 kubelet[2702]: I0702 01:51:25.727722 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 01:51:25.732984 kubelet[2702]: I0702 01:51:25.732964 2702 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 01:51:25.733623 kubelet[2702]: I0702 01:51:25.733607 2702 server.go:1232] "Started kubelet" Jul 2 01:51:25.735204 kubelet[2702]: I0702 01:51:25.735187 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 01:51:25.743299 kubelet[2702]: E0702 01:51:25.743280 2702 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 01:51:25.743428 kubelet[2702]: E0702 01:51:25.743418 2702 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 01:51:25.746101 kubelet[2702]: I0702 01:51:25.746085 2702 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 01:51:25.746916 kubelet[2702]: I0702 01:51:25.746900 2702 server.go:462] "Adding debug handlers to kubelet server" Jul 2 01:51:25.748000 kubelet[2702]: I0702 01:51:25.747983 2702 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 01:51:25.748239 kubelet[2702]: I0702 01:51:25.748227 2702 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 01:51:25.755873 kubelet[2702]: I0702 01:51:25.755856 2702 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 01:51:25.756320 kubelet[2702]: I0702 01:51:25.756301 2702 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 01:51:25.756517 kubelet[2702]: I0702 01:51:25.756507 2702 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 01:51:25.764443 kubelet[2702]: I0702 01:51:25.764427 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 01:51:25.765284 kubelet[2702]: I0702 01:51:25.765271 2702 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 01:51:25.765384 kubelet[2702]: I0702 01:51:25.765374 2702 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 01:51:25.765451 kubelet[2702]: I0702 01:51:25.765442 2702 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 01:51:25.765551 kubelet[2702]: E0702 01:51:25.765541 2702 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 01:51:25.844136 kubelet[2702]: I0702 01:51:25.844111 2702 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 01:51:25.844286 kubelet[2702]: I0702 01:51:25.844276 2702 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 01:51:25.844349 kubelet[2702]: I0702 01:51:25.844341 2702 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:25.844552 kubelet[2702]: I0702 01:51:25.844542 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 01:51:25.844711 kubelet[2702]: I0702 01:51:25.844701 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 01:51:25.844771 kubelet[2702]: I0702 01:51:25.844763 2702 policy_none.go:49] "None policy: Start" Jul 2 01:51:25.845329 kubelet[2702]: I0702 01:51:25.845317 2702 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 01:51:25.845417 kubelet[2702]: I0702 01:51:25.845408 2702 state_mem.go:35] "Initializing new in-memory state store" Jul 2 01:51:25.845645 kubelet[2702]: I0702 01:51:25.845633 2702 state_mem.go:75] "Updated machine memory state" Jul 2 01:51:25.846753 kubelet[2702]: I0702 01:51:25.846739 2702 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 01:51:25.847037 kubelet[2702]: I0702 01:51:25.847025 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 01:51:25.861277 kubelet[2702]: I0702 01:51:25.861257 2702 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.5-a-637f296955" Jul 2 01:51:25.866538 kubelet[2702]: I0702 01:51:25.866507 2702 topology_manager.go:215] "Topology Admit Handler" podUID="48528d3091784103fe477d2561eed7f1" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:25.866704 kubelet[2702]: I0702 01:51:25.866607 2702 topology_manager.go:215] "Topology Admit Handler" podUID="492ddebccae5acac1deb05d137a923c3" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:25.866704 kubelet[2702]: I0702 01:51:25.866657 2702 topology_manager.go:215] "Topology Admit Handler" podUID="6ce023f629119284d408e390cdb28145" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-637f296955" Jul 2 01:51:25.872577 kubelet[2702]: W0702 01:51:25.872562 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:25.875161 kubelet[2702]: W0702 01:51:25.874969 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:25.877724 kubelet[2702]: I0702 01:51:25.877707 2702 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:25.877873 kubelet[2702]: I0702 01:51:25.877863 2702 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-637f296955" Jul 2 01:51:25.879855 kubelet[2702]: W0702 01:51:25.879840 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:25.880004 kubelet[2702]: E0702 01:51:25.879992 2702 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.5-a-637f296955\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 
01:51:26.021753 sudo[2732]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 01:51:26.021969 sudo[2732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 01:51:26.057442 kubelet[2702]: I0702 01:51:26.057409 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.057662 kubelet[2702]: I0702 01:51:26.057650 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.057824 kubelet[2702]: I0702 01:51:26.057812 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.057915 kubelet[2702]: I0702 01:51:26.057906 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 
01:51:26.058010 kubelet[2702]: I0702 01:51:26.058001 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48528d3091784103fe477d2561eed7f1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-637f296955\" (UID: \"48528d3091784103fe477d2561eed7f1\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.058088 kubelet[2702]: I0702 01:51:26.058080 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48528d3091784103fe477d2561eed7f1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-637f296955\" (UID: \"48528d3091784103fe477d2561eed7f1\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.058163 kubelet[2702]: I0702 01:51:26.058154 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/492ddebccae5acac1deb05d137a923c3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-637f296955\" (UID: \"492ddebccae5acac1deb05d137a923c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.058228 kubelet[2702]: I0702 01:51:26.058220 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ce023f629119284d408e390cdb28145-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-637f296955\" (UID: \"6ce023f629119284d408e390cdb28145\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.058309 kubelet[2702]: I0702 01:51:26.058300 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48528d3091784103fe477d2561eed7f1-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-637f296955\" (UID: 
\"48528d3091784103fe477d2561eed7f1\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.473367 sudo[2732]: pam_unix(sudo:session): session closed for user root Jul 2 01:51:26.729324 kubelet[2702]: I0702 01:51:26.729232 2702 apiserver.go:52] "Watching apiserver" Jul 2 01:51:26.757111 kubelet[2702]: I0702 01:51:26.757079 2702 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 01:51:26.812736 kubelet[2702]: W0702 01:51:26.812714 2702 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:26.812942 kubelet[2702]: E0702 01:51:26.812930 2702 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-637f296955\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" Jul 2 01:51:26.824157 kubelet[2702]: I0702 01:51:26.824128 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-637f296955" podStartSLOduration=1.824073147 podCreationTimestamp="2024-07-02 01:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:26.816860442 +0000 UTC m=+1.186954620" watchObservedRunningTime="2024-07-02 01:51:26.824073147 +0000 UTC m=+1.194167365" Jul 2 01:51:26.824406 kubelet[2702]: I0702 01:51:26.824392 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-637f296955" podStartSLOduration=1.824371476 podCreationTimestamp="2024-07-02 01:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:26.823712871 +0000 UTC m=+1.193807089" watchObservedRunningTime="2024-07-02 01:51:26.824371476 +0000 UTC m=+1.194465694" Jul 2 
01:51:26.844527 kubelet[2702]: I0702 01:51:26.844488 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" podStartSLOduration=3.844411125 podCreationTimestamp="2024-07-02 01:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:26.833614503 +0000 UTC m=+1.203708721" watchObservedRunningTime="2024-07-02 01:51:26.844411125 +0000 UTC m=+1.214505343" Jul 2 01:51:28.205737 sudo[1959]: pam_unix(sudo:session): session closed for user root Jul 2 01:51:28.291066 sshd[1952]: pam_unix(sshd:session): session closed for user core Jul 2 01:51:28.293518 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Jul 2 01:51:28.293830 systemd[1]: sshd@4-10.200.20.41:22-10.200.16.10:32848.service: Deactivated successfully. Jul 2 01:51:28.294641 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 01:51:28.295851 systemd-logind[1543]: Removed session 7. Jul 2 01:51:40.429760 kubelet[2702]: I0702 01:51:40.429736 2702 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 01:51:40.430575 env[1557]: time="2024-07-02T01:51:40.430544893Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 01:51:40.431114 kubelet[2702]: I0702 01:51:40.431100 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 01:51:40.437043 kubelet[2702]: I0702 01:51:40.437007 2702 topology_manager.go:215] "Topology Admit Handler" podUID="b284655e-070b-4930-9815-b1f4245bc8b9" podNamespace="kube-system" podName="kube-proxy-glrq5" Jul 2 01:51:40.463543 kubelet[2702]: I0702 01:51:40.463485 2702 topology_manager.go:215] "Topology Admit Handler" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" podNamespace="kube-system" podName="cilium-xssjf" Jul 2 01:51:40.476312 kubelet[2702]: W0702 01:51:40.476284 2702 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.5-a-637f296955" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-637f296955' and this object Jul 2 01:51:40.476488 kubelet[2702]: E0702 01:51:40.476474 2702 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.5-a-637f296955" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-637f296955' and this object Jul 2 01:51:40.624672 kubelet[2702]: I0702 01:51:40.624637 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pmgn\" (UniqueName: \"kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-kube-api-access-4pmgn\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624825 kubelet[2702]: I0702 01:51:40.624698 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-hostproc\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624825 kubelet[2702]: I0702 01:51:40.624720 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-xtables-lock\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624825 kubelet[2702]: I0702 01:51:40.624739 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-kernel\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624825 kubelet[2702]: I0702 01:51:40.624766 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cni-path\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624825 kubelet[2702]: I0702 01:51:40.624787 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-cgroup\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624825 kubelet[2702]: I0702 01:51:40.624808 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-etc-cni-netd\") pod \"cilium-xssjf\" (UID: 
\"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624959 kubelet[2702]: I0702 01:51:40.624827 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e14df043-3e6d-4010-837c-b5c23edbf10b-clustermesh-secrets\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624959 kubelet[2702]: I0702 01:51:40.624859 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-config-path\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624959 kubelet[2702]: I0702 01:51:40.624879 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-bpf-maps\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.624959 kubelet[2702]: I0702 01:51:40.624898 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b284655e-070b-4930-9815-b1f4245bc8b9-xtables-lock\") pod \"kube-proxy-glrq5\" (UID: \"b284655e-070b-4930-9815-b1f4245bc8b9\") " pod="kube-system/kube-proxy-glrq5" Jul 2 01:51:40.624959 kubelet[2702]: I0702 01:51:40.624929 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmlcq\" (UniqueName: \"kubernetes.io/projected/b284655e-070b-4930-9815-b1f4245bc8b9-kube-api-access-tmlcq\") pod \"kube-proxy-glrq5\" (UID: \"b284655e-070b-4930-9815-b1f4245bc8b9\") " pod="kube-system/kube-proxy-glrq5" Jul 2 01:51:40.625070 
kubelet[2702]: I0702 01:51:40.624949 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-lib-modules\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.625070 kubelet[2702]: I0702 01:51:40.624967 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-net\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.625070 kubelet[2702]: I0702 01:51:40.624986 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-run\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.625070 kubelet[2702]: I0702 01:51:40.625019 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-hubble-tls\") pod \"cilium-xssjf\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " pod="kube-system/cilium-xssjf" Jul 2 01:51:40.625070 kubelet[2702]: I0702 01:51:40.625043 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b284655e-070b-4930-9815-b1f4245bc8b9-kube-proxy\") pod \"kube-proxy-glrq5\" (UID: \"b284655e-070b-4930-9815-b1f4245bc8b9\") " pod="kube-system/kube-proxy-glrq5" Jul 2 01:51:40.625070 kubelet[2702]: I0702 01:51:40.625063 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b284655e-070b-4930-9815-b1f4245bc8b9-lib-modules\") pod \"kube-proxy-glrq5\" (UID: \"b284655e-070b-4930-9815-b1f4245bc8b9\") " pod="kube-system/kube-proxy-glrq5" Jul 2 01:51:40.692912 kubelet[2702]: I0702 01:51:40.692811 2702 topology_manager.go:215] "Topology Admit Handler" podUID="96870388-9737-4248-94c0-2038639d3961" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-h7572" Jul 2 01:51:40.826942 kubelet[2702]: I0702 01:51:40.826903 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6dwm\" (UniqueName: \"kubernetes.io/projected/96870388-9737-4248-94c0-2038639d3961-kube-api-access-w6dwm\") pod \"cilium-operator-6bc8ccdb58-h7572\" (UID: \"96870388-9737-4248-94c0-2038639d3961\") " pod="kube-system/cilium-operator-6bc8ccdb58-h7572" Jul 2 01:51:40.827093 kubelet[2702]: I0702 01:51:40.826953 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96870388-9737-4248-94c0-2038639d3961-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-h7572\" (UID: \"96870388-9737-4248-94c0-2038639d3961\") " pod="kube-system/cilium-operator-6bc8ccdb58-h7572" Jul 2 01:51:41.040628 env[1557]: time="2024-07-02T01:51:41.040339866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-glrq5,Uid:b284655e-070b-4930-9815-b1f4245bc8b9,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:41.080102 env[1557]: time="2024-07-02T01:51:41.080026509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:41.080102 env[1557]: time="2024-07-02T01:51:41.080073500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:41.080310 env[1557]: time="2024-07-02T01:51:41.080085098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:41.080592 env[1557]: time="2024-07-02T01:51:41.080552618Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf5695b528c6018536b42087a26d54677acd460af9db6070867affa6866e5a69 pid=2782 runtime=io.containerd.runc.v2 Jul 2 01:51:41.117001 env[1557]: time="2024-07-02T01:51:41.116944869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-glrq5,Uid:b284655e-070b-4930-9815-b1f4245bc8b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf5695b528c6018536b42087a26d54677acd460af9db6070867affa6866e5a69\"" Jul 2 01:51:41.121416 env[1557]: time="2024-07-02T01:51:41.121377105Z" level=info msg="CreateContainer within sandbox \"bf5695b528c6018536b42087a26d54677acd460af9db6070867affa6866e5a69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 01:51:41.166150 env[1557]: time="2024-07-02T01:51:41.166105439Z" level=info msg="CreateContainer within sandbox \"bf5695b528c6018536b42087a26d54677acd460af9db6070867affa6866e5a69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83688ca76a245061d2873071eb9efe60b79300d0d0610ff13389a88ebeff769c\"" Jul 2 01:51:41.167851 env[1557]: time="2024-07-02T01:51:41.167819264Z" level=info msg="StartContainer for \"83688ca76a245061d2873071eb9efe60b79300d0d0610ff13389a88ebeff769c\"" Jul 2 01:51:41.223961 env[1557]: time="2024-07-02T01:51:41.223914880Z" level=info msg="StartContainer for \"83688ca76a245061d2873071eb9efe60b79300d0d0610ff13389a88ebeff769c\" returns successfully" Jul 2 01:51:41.730590 kubelet[2702]: E0702 01:51:41.730560 2702 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the 
condition Jul 2 01:51:41.731094 kubelet[2702]: E0702 01:51:41.731077 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-config-path podName:e14df043-3e6d-4010-837c-b5c23edbf10b nodeName:}" failed. No retries permitted until 2024-07-02 01:51:42.231051034 +0000 UTC m=+16.601145252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-config-path") pod "cilium-xssjf" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b") : failed to sync configmap cache: timed out waiting for the condition Jul 2 01:51:41.927675 kubelet[2702]: E0702 01:51:41.927640 2702 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 2 01:51:41.927824 kubelet[2702]: E0702 01:51:41.927729 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96870388-9737-4248-94c0-2038639d3961-cilium-config-path podName:96870388-9737-4248-94c0-2038639d3961 nodeName:}" failed. No retries permitted until 2024-07-02 01:51:42.427707595 +0000 UTC m=+16.797801773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/96870388-9737-4248-94c0-2038639d3961-cilium-config-path") pod "cilium-operator-6bc8ccdb58-h7572" (UID: "96870388-9737-4248-94c0-2038639d3961") : failed to sync configmap cache: timed out waiting for the condition Jul 2 01:51:42.267665 env[1557]: time="2024-07-02T01:51:42.267583980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xssjf,Uid:e14df043-3e6d-4010-837c-b5c23edbf10b,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:42.306820 env[1557]: time="2024-07-02T01:51:42.306747199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:42.307023 env[1557]: time="2024-07-02T01:51:42.306999796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:42.307166 env[1557]: time="2024-07-02T01:51:42.307109258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:42.307485 env[1557]: time="2024-07-02T01:51:42.307439002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a pid=2979 runtime=io.containerd.runc.v2 Jul 2 01:51:42.346255 env[1557]: time="2024-07-02T01:51:42.346215567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xssjf,Uid:e14df043-3e6d-4010-837c-b5c23edbf10b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\"" Jul 2 01:51:42.350453 env[1557]: time="2024-07-02T01:51:42.350413937Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 01:51:42.496033 env[1557]: time="2024-07-02T01:51:42.495987647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-h7572,Uid:96870388-9737-4248-94c0-2038639d3961,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:42.539246 env[1557]: time="2024-07-02T01:51:42.538790011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:42.539437 env[1557]: time="2024-07-02T01:51:42.539403348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:42.539542 env[1557]: time="2024-07-02T01:51:42.539516288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:42.539917 env[1557]: time="2024-07-02T01:51:42.539874828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8 pid=3021 runtime=io.containerd.runc.v2 Jul 2 01:51:42.581968 env[1557]: time="2024-07-02T01:51:42.581922919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-h7572,Uid:96870388-9737-4248-94c0-2038639d3961,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\"" Jul 2 01:51:46.987700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464583166.mount: Deactivated successfully. Jul 2 01:51:49.805344 env[1557]: time="2024-07-02T01:51:49.805292359Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:49.815896 env[1557]: time="2024-07-02T01:51:49.815816549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:49.821352 env[1557]: time="2024-07-02T01:51:49.821306450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:49.822022 env[1557]: time="2024-07-02T01:51:49.821996268Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 01:51:49.823404 env[1557]: time="2024-07-02T01:51:49.823366423Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 01:51:49.826825 env[1557]: time="2024-07-02T01:51:49.826784993Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 01:51:49.856512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543936651.mount: Deactivated successfully. Jul 2 01:51:49.863452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651399008.mount: Deactivated successfully. Jul 2 01:51:49.874391 env[1557]: time="2024-07-02T01:51:49.874342979Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\"" Jul 2 01:51:49.875246 env[1557]: time="2024-07-02T01:51:49.875214289Z" level=info msg="StartContainer for \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\"" Jul 2 01:51:49.923398 env[1557]: time="2024-07-02T01:51:49.923349949Z" level=info msg="StartContainer for \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\" returns successfully" Jul 2 01:51:50.853662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6-rootfs.mount: Deactivated successfully. 
Jul 2 01:51:50.998410 kubelet[2702]: I0702 01:51:50.870115 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-glrq5" podStartSLOduration=10.870070819 podCreationTimestamp="2024-07-02 01:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:41.826944474 +0000 UTC m=+16.197038692" watchObservedRunningTime="2024-07-02 01:51:50.870070819 +0000 UTC m=+25.240165037" Jul 2 01:51:51.022550 env[1557]: time="2024-07-02T01:51:51.022504952Z" level=info msg="shim disconnected" id=faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6 Jul 2 01:51:51.023026 env[1557]: time="2024-07-02T01:51:51.023004320Z" level=warning msg="cleaning up after shim disconnected" id=faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6 namespace=k8s.io Jul 2 01:51:51.023168 env[1557]: time="2024-07-02T01:51:51.023152698Z" level=info msg="cleaning up dead shim" Jul 2 01:51:51.032487 env[1557]: time="2024-07-02T01:51:51.032446998Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3102 runtime=io.containerd.runc.v2\n" Jul 2 01:51:51.861762 env[1557]: time="2024-07-02T01:51:51.859935512Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 01:51:51.896532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2965765082.mount: Deactivated successfully. Jul 2 01:51:51.903947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4229050970.mount: Deactivated successfully. 
Jul 2 01:51:51.919429 env[1557]: time="2024-07-02T01:51:51.919355581Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\"" Jul 2 01:51:51.922230 env[1557]: time="2024-07-02T01:51:51.920263890Z" level=info msg="StartContainer for \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\"" Jul 2 01:51:51.979476 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 01:51:51.979802 systemd[1]: Stopped systemd-sysctl.service. Jul 2 01:51:51.979995 systemd[1]: Stopping systemd-sysctl.service... Jul 2 01:51:51.982551 systemd[1]: Starting systemd-sysctl.service... Jul 2 01:51:51.991314 env[1557]: time="2024-07-02T01:51:51.991271247Z" level=info msg="StartContainer for \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\" returns successfully" Jul 2 01:51:51.993358 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 01:51:52.045935 env[1557]: time="2024-07-02T01:51:52.045883393Z" level=info msg="shim disconnected" id=2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d Jul 2 01:51:52.045935 env[1557]: time="2024-07-02T01:51:52.045934066Z" level=warning msg="cleaning up after shim disconnected" id=2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d namespace=k8s.io Jul 2 01:51:52.045935 env[1557]: time="2024-07-02T01:51:52.045942625Z" level=info msg="cleaning up dead shim" Jul 2 01:51:52.054127 env[1557]: time="2024-07-02T01:51:52.054078030Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3166 runtime=io.containerd.runc.v2\n" Jul 2 01:51:52.642615 env[1557]: time="2024-07-02T01:51:52.642552482Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:52.649743 env[1557]: time="2024-07-02T01:51:52.649706427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:52.653626 env[1557]: time="2024-07-02T01:51:52.653587716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:52.654088 env[1557]: time="2024-07-02T01:51:52.654062848Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 01:51:52.657187 env[1557]: 
time="2024-07-02T01:51:52.657057583Z" level=info msg="CreateContainer within sandbox \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 01:51:52.685384 env[1557]: time="2024-07-02T01:51:52.685354608Z" level=info msg="CreateContainer within sandbox \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\"" Jul 2 01:51:52.686011 env[1557]: time="2024-07-02T01:51:52.685988438Z" level=info msg="StartContainer for \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\"" Jul 2 01:51:52.737906 env[1557]: time="2024-07-02T01:51:52.737855158Z" level=info msg="StartContainer for \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\" returns successfully" Jul 2 01:51:52.858361 env[1557]: time="2024-07-02T01:51:52.858318743Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 01:51:52.892742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d-rootfs.mount: Deactivated successfully. 
Jul 2 01:51:52.904937 env[1557]: time="2024-07-02T01:51:52.904877936Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\"" Jul 2 01:51:52.905518 env[1557]: time="2024-07-02T01:51:52.905495008Z" level=info msg="StartContainer for \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\"" Jul 2 01:51:52.959384 systemd[1]: run-containerd-runc-k8s.io-abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7-runc.omleg5.mount: Deactivated successfully. Jul 2 01:51:52.979294 kubelet[2702]: I0702 01:51:52.979087 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-h7572" podStartSLOduration=2.9078248 podCreationTimestamp="2024-07-02 01:51:40 +0000 UTC" firstStartedPulling="2024-07-02 01:51:42.583109799 +0000 UTC m=+16.953204017" lastFinishedPulling="2024-07-02 01:51:52.65433289 +0000 UTC m=+27.024427108" observedRunningTime="2024-07-02 01:51:52.927732373 +0000 UTC m=+27.297826591" watchObservedRunningTime="2024-07-02 01:51:52.979047891 +0000 UTC m=+27.349142109" Jul 2 01:51:53.044326 env[1557]: time="2024-07-02T01:51:53.044282133Z" level=info msg="StartContainer for \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\" returns successfully" Jul 2 01:51:53.081748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7-rootfs.mount: Deactivated successfully. 
Jul 2 01:51:53.438161 env[1557]: time="2024-07-02T01:51:53.438110500Z" level=info msg="shim disconnected" id=abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7 Jul 2 01:51:53.438161 env[1557]: time="2024-07-02T01:51:53.438157533Z" level=warning msg="cleaning up after shim disconnected" id=abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7 namespace=k8s.io Jul 2 01:51:53.438161 env[1557]: time="2024-07-02T01:51:53.438168652Z" level=info msg="cleaning up dead shim" Jul 2 01:51:53.445721 env[1557]: time="2024-07-02T01:51:53.445673684Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3256 runtime=io.containerd.runc.v2\n" Jul 2 01:51:53.865647 env[1557]: time="2024-07-02T01:51:53.863787980Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 01:51:53.914970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989528862.mount: Deactivated successfully. 
Jul 2 01:51:53.927337 env[1557]: time="2024-07-02T01:51:53.927289313Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\"" Jul 2 01:51:53.929692 env[1557]: time="2024-07-02T01:51:53.929657662Z" level=info msg="StartContainer for \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\"" Jul 2 01:51:53.983346 env[1557]: time="2024-07-02T01:51:53.983304251Z" level=info msg="StartContainer for \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\" returns successfully" Jul 2 01:51:54.014193 env[1557]: time="2024-07-02T01:51:54.014149214Z" level=info msg="shim disconnected" id=5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a Jul 2 01:51:54.014558 env[1557]: time="2024-07-02T01:51:54.014535401Z" level=warning msg="cleaning up after shim disconnected" id=5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a namespace=k8s.io Jul 2 01:51:54.014672 env[1557]: time="2024-07-02T01:51:54.014657104Z" level=info msg="cleaning up dead shim" Jul 2 01:51:54.023047 env[1557]: time="2024-07-02T01:51:54.023012636Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3313 runtime=io.containerd.runc.v2\n" Jul 2 01:51:54.882616 env[1557]: time="2024-07-02T01:51:54.879718213Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 01:51:54.907417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a-rootfs.mount: Deactivated successfully. 
Jul 2 01:51:54.927977 env[1557]: time="2024-07-02T01:51:54.927851957Z" level=info msg="CreateContainer within sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\"" Jul 2 01:51:54.928366 env[1557]: time="2024-07-02T01:51:54.928343690Z" level=info msg="StartContainer for \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\"" Jul 2 01:51:55.008577 env[1557]: time="2024-07-02T01:51:55.008529686Z" level=info msg="StartContainer for \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\" returns successfully" Jul 2 01:51:55.123672 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 2 01:51:55.154787 kubelet[2702]: I0702 01:51:55.153766 2702 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 01:51:55.179827 kubelet[2702]: I0702 01:51:55.179788 2702 topology_manager.go:215] "Topology Admit Handler" podUID="58d6eafc-84ff-42e6-ba11-f446ef71d0c5" podNamespace="kube-system" podName="coredns-5dd5756b68-799pl" Jul 2 01:51:55.184518 kubelet[2702]: I0702 01:51:55.184487 2702 topology_manager.go:215] "Topology Admit Handler" podUID="423386af-fd44-4ab9-9908-c81328827b77" podNamespace="kube-system" podName="coredns-5dd5756b68-rrhbm" Jul 2 01:51:55.313467 kubelet[2702]: I0702 01:51:55.313436 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7df\" (UniqueName: \"kubernetes.io/projected/58d6eafc-84ff-42e6-ba11-f446ef71d0c5-kube-api-access-kh7df\") pod \"coredns-5dd5756b68-799pl\" (UID: \"58d6eafc-84ff-42e6-ba11-f446ef71d0c5\") " pod="kube-system/coredns-5dd5756b68-799pl" Jul 2 01:51:55.313688 kubelet[2702]: I0702 01:51:55.313675 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/423386af-fd44-4ab9-9908-c81328827b77-config-volume\") pod \"coredns-5dd5756b68-rrhbm\" (UID: \"423386af-fd44-4ab9-9908-c81328827b77\") " pod="kube-system/coredns-5dd5756b68-rrhbm" Jul 2 01:51:55.313791 kubelet[2702]: I0702 01:51:55.313780 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d6eafc-84ff-42e6-ba11-f446ef71d0c5-config-volume\") pod \"coredns-5dd5756b68-799pl\" (UID: \"58d6eafc-84ff-42e6-ba11-f446ef71d0c5\") " pod="kube-system/coredns-5dd5756b68-799pl" Jul 2 01:51:55.313888 kubelet[2702]: I0702 01:51:55.313879 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrsqr\" (UniqueName: \"kubernetes.io/projected/423386af-fd44-4ab9-9908-c81328827b77-kube-api-access-vrsqr\") pod \"coredns-5dd5756b68-rrhbm\" (UID: \"423386af-fd44-4ab9-9908-c81328827b77\") " pod="kube-system/coredns-5dd5756b68-rrhbm" Jul 2 01:51:55.484520 env[1557]: time="2024-07-02T01:51:55.484266076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-799pl,Uid:58d6eafc-84ff-42e6-ba11-f446ef71d0c5,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:55.489098 env[1557]: time="2024-07-02T01:51:55.489056428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rrhbm,Uid:423386af-fd44-4ab9-9908-c81328827b77,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:55.605689 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 01:51:55.882234 kubelet[2702]: I0702 01:51:55.882199 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xssjf" podStartSLOduration=8.407240263 podCreationTimestamp="2024-07-02 01:51:40 +0000 UTC" firstStartedPulling="2024-07-02 01:51:42.347539543 +0000 UTC m=+16.717633761" lastFinishedPulling="2024-07-02 01:51:49.822455799 +0000 UTC m=+24.192550017" observedRunningTime="2024-07-02 01:51:55.881234163 +0000 UTC m=+30.251328381" watchObservedRunningTime="2024-07-02 01:51:55.882156519 +0000 UTC m=+30.252250737" Jul 2 01:51:56.842957 systemd-networkd[1730]: cilium_host: Link UP Jul 2 01:51:56.843578 systemd-networkd[1730]: cilium_net: Link UP Jul 2 01:51:56.843582 systemd-networkd[1730]: cilium_net: Gained carrier Jul 2 01:51:56.851127 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 01:51:56.851313 systemd-networkd[1730]: cilium_host: Gained carrier Jul 2 01:51:57.012689 systemd-networkd[1730]: cilium_vxlan: Link UP Jul 2 01:51:57.012695 systemd-networkd[1730]: cilium_vxlan: Gained carrier Jul 2 01:51:57.282618 kernel: NET: Registered PF_ALG protocol family Jul 2 01:51:57.715910 systemd-networkd[1730]: cilium_net: Gained IPv6LL Jul 2 01:51:57.843811 systemd-networkd[1730]: cilium_host: Gained IPv6LL Jul 2 01:51:58.066893 systemd-networkd[1730]: lxc_health: Link UP Jul 2 01:51:58.082627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 01:51:58.081725 systemd-networkd[1730]: lxc_health: Gained carrier Jul 2 01:51:58.547742 systemd-networkd[1730]: cilium_vxlan: Gained IPv6LL Jul 2 01:51:58.565570 systemd-networkd[1730]: lxc0d2a0241da6f: Link UP Jul 2 01:51:58.574649 kernel: eth0: renamed from tmp9cf27 Jul 2 01:51:58.587139 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0d2a0241da6f: link becomes ready Jul 2 01:51:58.586078 systemd-networkd[1730]: lxc0d2a0241da6f: Gained carrier Jul 2 01:51:58.602887 systemd-networkd[1730]: lxc31ac213eefd9: Link UP Jul 2 01:51:58.616702 
kernel: eth0: renamed from tmp3da5e Jul 2 01:51:58.632807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc31ac213eefd9: link becomes ready Jul 2 01:51:58.633085 systemd-networkd[1730]: lxc31ac213eefd9: Gained carrier Jul 2 01:51:59.700714 systemd-networkd[1730]: lxc_health: Gained IPv6LL Jul 2 01:52:00.083731 systemd-networkd[1730]: lxc31ac213eefd9: Gained IPv6LL Jul 2 01:52:00.403732 systemd-networkd[1730]: lxc0d2a0241da6f: Gained IPv6LL Jul 2 01:52:02.293875 env[1557]: time="2024-07-02T01:52:02.293794753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:52:02.294221 env[1557]: time="2024-07-02T01:52:02.293886661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:52:02.294221 env[1557]: time="2024-07-02T01:52:02.293913858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:52:02.295787 env[1557]: time="2024-07-02T01:52:02.295732436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3da5e044f65ac904ac7d4a7871dd666e4de142b4836c1c14e369577a5a652def pid=3863 runtime=io.containerd.runc.v2 Jul 2 01:52:02.302875 env[1557]: time="2024-07-02T01:52:02.302800093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:52:02.303055 env[1557]: time="2024-07-02T01:52:02.303033184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:52:02.303148 env[1557]: time="2024-07-02T01:52:02.303129372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:52:02.303381 env[1557]: time="2024-07-02T01:52:02.303347346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cf279ff5fa4df62816022830e0a74a0014f488557ed6c737d1b8d477efeef7f pid=3880 runtime=io.containerd.runc.v2 Jul 2 01:52:02.395663 env[1557]: time="2024-07-02T01:52:02.395623315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-799pl,Uid:58d6eafc-84ff-42e6-ba11-f446ef71d0c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cf279ff5fa4df62816022830e0a74a0014f488557ed6c737d1b8d477efeef7f\"" Jul 2 01:52:02.402788 env[1557]: time="2024-07-02T01:52:02.402737566Z" level=info msg="CreateContainer within sandbox \"9cf279ff5fa4df62816022830e0a74a0014f488557ed6c737d1b8d477efeef7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 01:52:02.419269 env[1557]: time="2024-07-02T01:52:02.419224152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rrhbm,Uid:423386af-fd44-4ab9-9908-c81328827b77,Namespace:kube-system,Attempt:0,} returns sandbox id \"3da5e044f65ac904ac7d4a7871dd666e4de142b4836c1c14e369577a5a652def\"" Jul 2 01:52:02.426636 env[1557]: time="2024-07-02T01:52:02.425842744Z" level=info msg="CreateContainer within sandbox \"3da5e044f65ac904ac7d4a7871dd666e4de142b4836c1c14e369577a5a652def\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 01:52:02.453502 env[1557]: time="2024-07-02T01:52:02.453443452Z" level=info msg="CreateContainer within sandbox \"9cf279ff5fa4df62816022830e0a74a0014f488557ed6c737d1b8d477efeef7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9dec1aeac05c6219ab2f5a7644f220ced66670a45fec8a801800b1e7263215c8\"" Jul 2 01:52:02.454447 env[1557]: time="2024-07-02T01:52:02.454420333Z" level=info msg="StartContainer for \"9dec1aeac05c6219ab2f5a7644f220ced66670a45fec8a801800b1e7263215c8\"" Jul 2 01:52:02.476248 
env[1557]: time="2024-07-02T01:52:02.476125722Z" level=info msg="CreateContainer within sandbox \"3da5e044f65ac904ac7d4a7871dd666e4de142b4836c1c14e369577a5a652def\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"acf24769eec7141c6b483c5105fdae3a5816c91852824db732aec907a96726ee\"" Jul 2 01:52:02.479382 env[1557]: time="2024-07-02T01:52:02.479338090Z" level=info msg="StartContainer for \"acf24769eec7141c6b483c5105fdae3a5816c91852824db732aec907a96726ee\"" Jul 2 01:52:02.529785 env[1557]: time="2024-07-02T01:52:02.529739653Z" level=info msg="StartContainer for \"9dec1aeac05c6219ab2f5a7644f220ced66670a45fec8a801800b1e7263215c8\" returns successfully" Jul 2 01:52:02.583732 env[1557]: time="2024-07-02T01:52:02.583581117Z" level=info msg="StartContainer for \"acf24769eec7141c6b483c5105fdae3a5816c91852824db732aec907a96726ee\" returns successfully" Jul 2 01:52:02.897705 kubelet[2702]: I0702 01:52:02.897578 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rrhbm" podStartSLOduration=22.89752293 podCreationTimestamp="2024-07-02 01:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:52:02.896272043 +0000 UTC m=+37.266366261" watchObservedRunningTime="2024-07-02 01:52:02.89752293 +0000 UTC m=+37.267617108" Jul 2 01:52:02.913248 kubelet[2702]: I0702 01:52:02.913190 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-799pl" podStartSLOduration=22.913150502 podCreationTimestamp="2024-07-02 01:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:52:02.912409472 +0000 UTC m=+37.282503690" watchObservedRunningTime="2024-07-02 01:52:02.913150502 +0000 UTC m=+37.283244760" Jul 2 01:52:03.300034 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2912944738.mount: Deactivated successfully. Jul 2 01:53:52.118650 systemd[1]: Started sshd@5-10.200.20.41:22-10.200.16.10:57850.service. Jul 2 01:53:52.589154 sshd[4032]: Accepted publickey for core from 10.200.16.10 port 57850 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:53:52.590535 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:53:52.594501 systemd-logind[1543]: New session 8 of user core. Jul 2 01:53:52.594938 systemd[1]: Started session-8.scope. Jul 2 01:53:53.056829 sshd[4032]: pam_unix(sshd:session): session closed for user core Jul 2 01:53:53.059915 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Jul 2 01:53:53.060673 systemd[1]: sshd@5-10.200.20.41:22-10.200.16.10:57850.service: Deactivated successfully. Jul 2 01:53:53.061662 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 01:53:53.062162 systemd-logind[1543]: Removed session 8. Jul 2 01:53:58.136578 systemd[1]: Started sshd@6-10.200.20.41:22-10.200.16.10:57856.service. Jul 2 01:53:58.608622 sshd[4045]: Accepted publickey for core from 10.200.16.10 port 57856 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:53:58.609911 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:53:58.614393 systemd[1]: Started session-9.scope. Jul 2 01:53:58.615379 systemd-logind[1543]: New session 9 of user core. Jul 2 01:53:59.016947 sshd[4045]: pam_unix(sshd:session): session closed for user core Jul 2 01:53:59.019667 systemd[1]: sshd@6-10.200.20.41:22-10.200.16.10:57856.service: Deactivated successfully. Jul 2 01:53:59.021017 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 01:53:59.021518 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. Jul 2 01:53:59.022571 systemd-logind[1543]: Removed session 9. 
Jul 2 01:54:04.095072 systemd[1]: Started sshd@7-10.200.20.41:22-10.200.16.10:39514.service. Jul 2 01:54:04.565711 sshd[4058]: Accepted publickey for core from 10.200.16.10 port 39514 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:04.567407 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:04.571249 systemd-logind[1543]: New session 10 of user core. Jul 2 01:54:04.571740 systemd[1]: Started session-10.scope. Jul 2 01:54:04.971641 sshd[4058]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:04.974443 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Jul 2 01:54:04.975192 systemd[1]: sshd@7-10.200.20.41:22-10.200.16.10:39514.service: Deactivated successfully. Jul 2 01:54:04.976062 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 01:54:04.976918 systemd-logind[1543]: Removed session 10. Jul 2 01:54:10.048560 systemd[1]: Started sshd@8-10.200.20.41:22-10.200.16.10:60746.service. Jul 2 01:54:10.518269 sshd[4071]: Accepted publickey for core from 10.200.16.10 port 60746 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:10.519911 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:10.524379 systemd[1]: Started session-11.scope. Jul 2 01:54:10.525451 systemd-logind[1543]: New session 11 of user core. Jul 2 01:54:10.927870 sshd[4071]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:10.930780 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Jul 2 01:54:10.931888 systemd[1]: sshd@8-10.200.20.41:22-10.200.16.10:60746.service: Deactivated successfully. Jul 2 01:54:10.932776 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 01:54:10.934242 systemd-logind[1543]: Removed session 11. Jul 2 01:54:10.997966 systemd[1]: Started sshd@9-10.200.20.41:22-10.200.16.10:60756.service. 
Jul 2 01:54:11.428957 sshd[4086]: Accepted publickey for core from 10.200.16.10 port 60756 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:11.430720 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:11.434904 systemd-logind[1543]: New session 12 of user core. Jul 2 01:54:11.435416 systemd[1]: Started session-12.scope. Jul 2 01:54:12.430817 sshd[4086]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:12.433380 systemd[1]: sshd@9-10.200.20.41:22-10.200.16.10:60756.service: Deactivated successfully. Jul 2 01:54:12.434516 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 01:54:12.434544 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Jul 2 01:54:12.435624 systemd-logind[1543]: Removed session 12. Jul 2 01:54:12.501493 systemd[1]: Started sshd@10-10.200.20.41:22-10.200.16.10:60772.service. Jul 2 01:54:12.937572 sshd[4099]: Accepted publickey for core from 10.200.16.10 port 60772 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:12.939294 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:12.944865 systemd[1]: Started session-13.scope. Jul 2 01:54:12.945258 systemd-logind[1543]: New session 13 of user core. Jul 2 01:54:13.332309 sshd[4099]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:13.334710 systemd[1]: sshd@10-10.200.20.41:22-10.200.16.10:60772.service: Deactivated successfully. Jul 2 01:54:13.336290 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 01:54:13.336954 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. Jul 2 01:54:13.338077 systemd-logind[1543]: Removed session 13. Jul 2 01:54:18.409401 systemd[1]: Started sshd@11-10.200.20.41:22-10.200.16.10:47416.service. 
Jul 2 01:54:18.879313 sshd[4111]: Accepted publickey for core from 10.200.16.10 port 47416 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:18.881100 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:18.885970 systemd[1]: Started session-14.scope. Jul 2 01:54:18.886657 systemd-logind[1543]: New session 14 of user core. Jul 2 01:54:19.283934 sshd[4111]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:19.286688 systemd[1]: sshd@11-10.200.20.41:22-10.200.16.10:47416.service: Deactivated successfully. Jul 2 01:54:19.287482 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 01:54:19.288078 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Jul 2 01:54:19.288861 systemd-logind[1543]: Removed session 14. Jul 2 01:54:24.361432 systemd[1]: Started sshd@12-10.200.20.41:22-10.200.16.10:47428.service. Jul 2 01:54:24.831739 sshd[4124]: Accepted publickey for core from 10.200.16.10 port 47428 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:24.833411 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:24.837725 systemd[1]: Started session-15.scope. Jul 2 01:54:24.838895 systemd-logind[1543]: New session 15 of user core. Jul 2 01:54:25.242723 sshd[4124]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:25.246032 systemd[1]: sshd@12-10.200.20.41:22-10.200.16.10:47428.service: Deactivated successfully. Jul 2 01:54:25.247655 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 01:54:25.248259 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Jul 2 01:54:25.249394 systemd-logind[1543]: Removed session 15. Jul 2 01:54:25.319664 systemd[1]: Started sshd@13-10.200.20.41:22-10.200.16.10:47438.service. 
Jul 2 01:54:25.790252 sshd[4137]: Accepted publickey for core from 10.200.16.10 port 47438 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:25.791990 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:25.795698 systemd-logind[1543]: New session 16 of user core. Jul 2 01:54:25.796431 systemd[1]: Started session-16.scope. Jul 2 01:54:26.231794 sshd[4137]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:26.234153 systemd[1]: sshd@13-10.200.20.41:22-10.200.16.10:47438.service: Deactivated successfully. Jul 2 01:54:26.235256 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. Jul 2 01:54:26.235308 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 01:54:26.236264 systemd-logind[1543]: Removed session 16. Jul 2 01:54:26.302405 systemd[1]: Started sshd@14-10.200.20.41:22-10.200.16.10:47450.service. Jul 2 01:54:26.737785 sshd[4150]: Accepted publickey for core from 10.200.16.10 port 47450 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:26.739380 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:26.743793 systemd[1]: Started session-17.scope. Jul 2 01:54:26.744751 systemd-logind[1543]: New session 17 of user core. Jul 2 01:54:28.191853 sshd[4150]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:28.194875 systemd[1]: sshd@14-10.200.20.41:22-10.200.16.10:47450.service: Deactivated successfully. Jul 2 01:54:28.195689 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 01:54:28.196113 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Jul 2 01:54:28.196860 systemd-logind[1543]: Removed session 17. Jul 2 01:54:28.263657 systemd[1]: Started sshd@15-10.200.20.41:22-10.200.16.10:47454.service. 
Jul 2 01:54:28.700653 sshd[4168]: Accepted publickey for core from 10.200.16.10 port 47454 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:28.701971 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:28.706277 systemd[1]: Started session-18.scope. Jul 2 01:54:28.706451 systemd-logind[1543]: New session 18 of user core. Jul 2 01:54:29.242981 sshd[4168]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:29.246124 systemd[1]: sshd@15-10.200.20.41:22-10.200.16.10:47454.service: Deactivated successfully. Jul 2 01:54:29.247193 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. Jul 2 01:54:29.247254 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 01:54:29.248662 systemd-logind[1543]: Removed session 18. Jul 2 01:54:29.313271 systemd[1]: Started sshd@16-10.200.20.41:22-10.200.16.10:54232.service. Jul 2 01:54:29.744733 sshd[4179]: Accepted publickey for core from 10.200.16.10 port 54232 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:29.746383 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:29.750511 systemd-logind[1543]: New session 19 of user core. Jul 2 01:54:29.750971 systemd[1]: Started session-19.scope. Jul 2 01:54:30.128786 sshd[4179]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:30.131147 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Jul 2 01:54:30.131828 systemd[1]: sshd@16-10.200.20.41:22-10.200.16.10:54232.service: Deactivated successfully. Jul 2 01:54:30.132669 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 01:54:30.133137 systemd-logind[1543]: Removed session 19. Jul 2 01:54:35.205206 systemd[1]: Started sshd@17-10.200.20.41:22-10.200.16.10:54236.service. 
Jul 2 01:54:35.670346 sshd[4195]: Accepted publickey for core from 10.200.16.10 port 54236 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:35.671767 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:35.676590 systemd[1]: Started session-20.scope. Jul 2 01:54:35.677541 systemd-logind[1543]: New session 20 of user core. Jul 2 01:54:36.068745 sshd[4195]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:36.071407 systemd[1]: sshd@17-10.200.20.41:22-10.200.16.10:54236.service: Deactivated successfully. Jul 2 01:54:36.072758 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 01:54:36.072794 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit. Jul 2 01:54:36.073750 systemd-logind[1543]: Removed session 20. Jul 2 01:54:41.140805 systemd[1]: Started sshd@18-10.200.20.41:22-10.200.16.10:58790.service. Jul 2 01:54:41.575848 sshd[4208]: Accepted publickey for core from 10.200.16.10 port 58790 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:41.577146 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:41.582197 systemd[1]: Started session-21.scope. Jul 2 01:54:41.583233 systemd-logind[1543]: New session 21 of user core. Jul 2 01:54:41.963339 sshd[4208]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:41.966350 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit. Jul 2 01:54:41.967745 systemd[1]: sshd@18-10.200.20.41:22-10.200.16.10:58790.service: Deactivated successfully. Jul 2 01:54:41.968511 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 01:54:41.969955 systemd-logind[1543]: Removed session 21. Jul 2 01:54:47.040942 systemd[1]: Started sshd@19-10.200.20.41:22-10.200.16.10:58806.service. 
Jul 2 01:54:47.508911 sshd[4225]: Accepted publickey for core from 10.200.16.10 port 58806 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:47.510514 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:47.515104 systemd[1]: Started session-22.scope. Jul 2 01:54:47.516151 systemd-logind[1543]: New session 22 of user core. Jul 2 01:54:47.916808 sshd[4225]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:47.919167 systemd[1]: sshd@19-10.200.20.41:22-10.200.16.10:58806.service: Deactivated successfully. Jul 2 01:54:47.920020 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 01:54:47.921123 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit. Jul 2 01:54:47.921965 systemd-logind[1543]: Removed session 22. Jul 2 01:54:47.992013 systemd[1]: Started sshd@20-10.200.20.41:22-10.200.16.10:58816.service. Jul 2 01:54:48.457140 sshd[4237]: Accepted publickey for core from 10.200.16.10 port 58816 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:48.458761 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:48.463305 systemd[1]: Started session-23.scope. Jul 2 01:54:48.464300 systemd-logind[1543]: New session 23 of user core. Jul 2 01:54:51.308939 env[1557]: time="2024-07-02T01:54:51.306888574Z" level=info msg="StopContainer for \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\" with timeout 30 (s)" Jul 2 01:54:51.308939 env[1557]: time="2024-07-02T01:54:51.307206662Z" level=info msg="Stop container \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\" with signal terminated" Jul 2 01:54:51.317525 systemd[1]: run-containerd-runc-k8s.io-f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b-runc.mOPIih.mount: Deactivated successfully. 
Jul 2 01:54:51.338724 env[1557]: time="2024-07-02T01:54:51.338659686Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 01:54:51.345903 env[1557]: time="2024-07-02T01:54:51.345870416Z" level=info msg="StopContainer for \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\" with timeout 2 (s)" Jul 2 01:54:51.346322 env[1557]: time="2024-07-02T01:54:51.346292026Z" level=info msg="Stop container \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\" with signal terminated" Jul 2 01:54:51.352481 systemd-networkd[1730]: lxc_health: Link DOWN Jul 2 01:54:51.352488 systemd-networkd[1730]: lxc_health: Lost carrier Jul 2 01:54:51.355982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966-rootfs.mount: Deactivated successfully. Jul 2 01:54:51.392731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b-rootfs.mount: Deactivated successfully. 
Jul 2 01:54:51.410031 env[1557]: time="2024-07-02T01:54:51.409988492Z" level=info msg="shim disconnected" id=d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966 Jul 2 01:54:51.410248 env[1557]: time="2024-07-02T01:54:51.410229698Z" level=warning msg="cleaning up after shim disconnected" id=d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966 namespace=k8s.io Jul 2 01:54:51.410306 env[1557]: time="2024-07-02T01:54:51.410294659Z" level=info msg="cleaning up dead shim" Jul 2 01:54:51.410880 env[1557]: time="2024-07-02T01:54:51.410839112Z" level=info msg="shim disconnected" id=f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b Jul 2 01:54:51.410962 env[1557]: time="2024-07-02T01:54:51.410880153Z" level=warning msg="cleaning up after shim disconnected" id=f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b namespace=k8s.io Jul 2 01:54:51.410962 env[1557]: time="2024-07-02T01:54:51.410890793Z" level=info msg="cleaning up dead shim" Jul 2 01:54:51.417647 env[1557]: time="2024-07-02T01:54:51.417590832Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4306 runtime=io.containerd.runc.v2\n" Jul 2 01:54:51.418089 env[1557]: time="2024-07-02T01:54:51.418049123Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4307 runtime=io.containerd.runc.v2\n" Jul 2 01:54:51.426813 env[1557]: time="2024-07-02T01:54:51.426761049Z" level=info msg="StopContainer for \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\" returns successfully" Jul 2 01:54:51.427563 env[1557]: time="2024-07-02T01:54:51.427532267Z" level=info msg="StopPodSandbox for \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\"" Jul 2 01:54:51.427722 env[1557]: time="2024-07-02T01:54:51.427593108Z" level=info msg="Container to stop 
\"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 01:54:51.429797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8-shm.mount: Deactivated successfully. Jul 2 01:54:51.431811 env[1557]: time="2024-07-02T01:54:51.431219714Z" level=info msg="StopContainer for \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\" returns successfully" Jul 2 01:54:51.432531 env[1557]: time="2024-07-02T01:54:51.432123976Z" level=info msg="StopPodSandbox for \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\"" Jul 2 01:54:51.432531 env[1557]: time="2024-07-02T01:54:51.432212178Z" level=info msg="Container to stop \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 01:54:51.432531 env[1557]: time="2024-07-02T01:54:51.432227458Z" level=info msg="Container to stop \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 01:54:51.432531 env[1557]: time="2024-07-02T01:54:51.432238738Z" level=info msg="Container to stop \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 01:54:51.432531 env[1557]: time="2024-07-02T01:54:51.432252659Z" level=info msg="Container to stop \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 01:54:51.432531 env[1557]: time="2024-07-02T01:54:51.432264139Z" level=info msg="Container to stop \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 01:54:51.482797 env[1557]: 
time="2024-07-02T01:54:51.482730972Z" level=info msg="shim disconnected" id=3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8 Jul 2 01:54:51.482797 env[1557]: time="2024-07-02T01:54:51.482792694Z" level=warning msg="cleaning up after shim disconnected" id=3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8 namespace=k8s.io Jul 2 01:54:51.482797 env[1557]: time="2024-07-02T01:54:51.482802054Z" level=info msg="cleaning up dead shim" Jul 2 01:54:51.483627 env[1557]: time="2024-07-02T01:54:51.483579232Z" level=info msg="shim disconnected" id=2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a Jul 2 01:54:51.483730 env[1557]: time="2024-07-02T01:54:51.483713715Z" level=warning msg="cleaning up after shim disconnected" id=2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a namespace=k8s.io Jul 2 01:54:51.483792 env[1557]: time="2024-07-02T01:54:51.483780357Z" level=info msg="cleaning up dead shim" Jul 2 01:54:51.493709 env[1557]: time="2024-07-02T01:54:51.493665191Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4375 runtime=io.containerd.runc.v2\n" Jul 2 01:54:51.494197 env[1557]: time="2024-07-02T01:54:51.494169883Z" level=info msg="TearDown network for sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" successfully" Jul 2 01:54:51.494445 env[1557]: time="2024-07-02T01:54:51.494424409Z" level=info msg="StopPodSandbox for \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" returns successfully" Jul 2 01:54:51.495666 env[1557]: time="2024-07-02T01:54:51.494729656Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4374 runtime=io.containerd.runc.v2\n" Jul 2 01:54:51.495666 env[1557]: time="2024-07-02T01:54:51.494987462Z" level=info msg="TearDown network for sandbox 
\"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" successfully" Jul 2 01:54:51.495666 env[1557]: time="2024-07-02T01:54:51.495010103Z" level=info msg="StopPodSandbox for \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" returns successfully" Jul 2 01:54:51.663364 kubelet[2702]: I0702 01:54:51.663243 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e14df043-3e6d-4010-837c-b5c23edbf10b-clustermesh-secrets\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663364 kubelet[2702]: I0702 01:54:51.663312 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-kernel\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663364 kubelet[2702]: I0702 01:54:51.663335 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-config-path\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663789 kubelet[2702]: I0702 01:54:51.663372 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96870388-9737-4248-94c0-2038639d3961-cilium-config-path\") pod \"96870388-9737-4248-94c0-2038639d3961\" (UID: \"96870388-9737-4248-94c0-2038639d3961\") " Jul 2 01:54:51.663789 kubelet[2702]: I0702 01:54:51.663390 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-cgroup\") pod 
\"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663789 kubelet[2702]: I0702 01:54:51.663412 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-hubble-tls\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663789 kubelet[2702]: I0702 01:54:51.663444 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-run\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663789 kubelet[2702]: I0702 01:54:51.663462 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-etc-cni-netd\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663789 kubelet[2702]: I0702 01:54:51.663482 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-net\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663927 kubelet[2702]: I0702 01:54:51.663517 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pmgn\" (UniqueName: \"kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-kube-api-access-4pmgn\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663927 kubelet[2702]: I0702 01:54:51.663536 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-xtables-lock\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663927 kubelet[2702]: I0702 01:54:51.663553 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-hostproc\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663927 kubelet[2702]: I0702 01:54:51.663568 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cni-path\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663927 kubelet[2702]: I0702 01:54:51.663612 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-bpf-maps\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.663927 kubelet[2702]: I0702 01:54:51.663633 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-lib-modules\") pod \"e14df043-3e6d-4010-837c-b5c23edbf10b\" (UID: \"e14df043-3e6d-4010-837c-b5c23edbf10b\") " Jul 2 01:54:51.664065 kubelet[2702]: I0702 01:54:51.663653 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6dwm\" (UniqueName: \"kubernetes.io/projected/96870388-9737-4248-94c0-2038639d3961-kube-api-access-w6dwm\") pod \"96870388-9737-4248-94c0-2038639d3961\" (UID: \"96870388-9737-4248-94c0-2038639d3961\") " Jul 2 01:54:51.665465 kubelet[2702]: I0702 01:54:51.665439 2702 
operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.665645 kubelet[2702]: I0702 01:54:51.665630 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.667704 kubelet[2702]: I0702 01:54:51.667644 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.668180 kubelet[2702]: I0702 01:54:51.668147 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.668246 kubelet[2702]: I0702 01:54:51.668182 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-hostproc" (OuterVolumeSpecName: "hostproc") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.668246 kubelet[2702]: I0702 01:54:51.668206 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cni-path" (OuterVolumeSpecName: "cni-path") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.668246 kubelet[2702]: I0702 01:54:51.668223 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.668246 kubelet[2702]: I0702 01:54:51.668237 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.668441 kubelet[2702]: I0702 01:54:51.668422 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14df043-3e6d-4010-837c-b5c23edbf10b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 01:54:51.669792 kubelet[2702]: I0702 01:54:51.669753 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 01:54:51.670164 kubelet[2702]: I0702 01:54:51.669853 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96870388-9737-4248-94c0-2038639d3961-kube-api-access-w6dwm" (OuterVolumeSpecName: "kube-api-access-w6dwm") pod "96870388-9737-4248-94c0-2038639d3961" (UID: "96870388-9737-4248-94c0-2038639d3961"). InnerVolumeSpecName "kube-api-access-w6dwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:51.670206 kubelet[2702]: I0702 01:54:51.670187 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.670234 kubelet[2702]: I0702 01:54:51.670225 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:51.671080 kubelet[2702]: I0702 01:54:51.671043 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96870388-9737-4248-94c0-2038639d3961-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "96870388-9737-4248-94c0-2038639d3961" (UID: "96870388-9737-4248-94c0-2038639d3961"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 01:54:51.672362 kubelet[2702]: I0702 01:54:51.672334 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-kube-api-access-4pmgn" (OuterVolumeSpecName: "kube-api-access-4pmgn") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "kube-api-access-4pmgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:51.673325 kubelet[2702]: I0702 01:54:51.673299 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e14df043-3e6d-4010-837c-b5c23edbf10b" (UID: "e14df043-3e6d-4010-837c-b5c23edbf10b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:51.764181 kubelet[2702]: I0702 01:54:51.764155 2702 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4pmgn\" (UniqueName: \"kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-kube-api-access-4pmgn\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764326 kubelet[2702]: I0702 01:54:51.764316 2702 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-xtables-lock\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764389 kubelet[2702]: I0702 01:54:51.764380 2702 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-hostproc\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764446 kubelet[2702]: I0702 01:54:51.764438 2702 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cni-path\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764509 kubelet[2702]: I0702 01:54:51.764501 2702 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-bpf-maps\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764566 kubelet[2702]: I0702 01:54:51.764558 2702 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-lib-modules\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764651 kubelet[2702]: I0702 01:54:51.764643 2702 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w6dwm\" (UniqueName: \"kubernetes.io/projected/96870388-9737-4248-94c0-2038639d3961-kube-api-access-w6dwm\") on node 
\"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764717 kubelet[2702]: I0702 01:54:51.764709 2702 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e14df043-3e6d-4010-837c-b5c23edbf10b-clustermesh-secrets\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764773 kubelet[2702]: I0702 01:54:51.764765 2702 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764830 kubelet[2702]: I0702 01:54:51.764821 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-config-path\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764892 kubelet[2702]: I0702 01:54:51.764882 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96870388-9737-4248-94c0-2038639d3961-cilium-config-path\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.764950 kubelet[2702]: I0702 01:54:51.764942 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-cgroup\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.765005 kubelet[2702]: I0702 01:54:51.764998 2702 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e14df043-3e6d-4010-837c-b5c23edbf10b-hubble-tls\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.765059 kubelet[2702]: I0702 01:54:51.765052 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-cilium-run\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.765136 kubelet[2702]: I0702 01:54:51.765106 2702 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-etc-cni-netd\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:51.765205 kubelet[2702]: I0702 01:54:51.765196 2702 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e14df043-3e6d-4010-837c-b5c23edbf10b-host-proc-sys-net\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:52.181770 kubelet[2702]: I0702 01:54:52.181741 2702 scope.go:117] "RemoveContainer" containerID="d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966" Jul 2 01:54:52.183918 env[1557]: time="2024-07-02T01:54:52.183884040Z" level=info msg="RemoveContainer for \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\"" Jul 2 01:54:52.200976 env[1557]: time="2024-07-02T01:54:52.200932710Z" level=info msg="RemoveContainer for \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\" returns successfully" Jul 2 01:54:52.201461 kubelet[2702]: I0702 01:54:52.201432 2702 scope.go:117] "RemoveContainer" containerID="d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966" Jul 2 01:54:52.201886 env[1557]: time="2024-07-02T01:54:52.201813650Z" level=error msg="ContainerStatus for \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\": not found" Jul 2 01:54:52.202125 kubelet[2702]: E0702 01:54:52.202105 2702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\": not found" containerID="d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966" Jul 2 01:54:52.202215 kubelet[2702]: I0702 01:54:52.202198 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966"} err="failed to get container status \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\": rpc error: code = NotFound desc = an error occurred when try to find container \"d86e45487ea992809e66aeb3d9164890169f9b1eb6518c93d39e4bc9aef35966\": not found" Jul 2 01:54:52.202215 kubelet[2702]: I0702 01:54:52.202215 2702 scope.go:117] "RemoveContainer" containerID="f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b" Jul 2 01:54:52.203469 env[1557]: time="2024-07-02T01:54:52.203440247Z" level=info msg="RemoveContainer for \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\"" Jul 2 01:54:52.214348 env[1557]: time="2024-07-02T01:54:52.214302735Z" level=info msg="RemoveContainer for \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\" returns successfully" Jul 2 01:54:52.214891 kubelet[2702]: I0702 01:54:52.214805 2702 scope.go:117] "RemoveContainer" containerID="5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a" Jul 2 01:54:52.216127 env[1557]: time="2024-07-02T01:54:52.216090696Z" level=info msg="RemoveContainer for \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\"" Jul 2 01:54:52.224587 env[1557]: time="2024-07-02T01:54:52.224543848Z" level=info msg="RemoveContainer for \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\" returns successfully" Jul 2 01:54:52.224902 kubelet[2702]: I0702 01:54:52.224877 2702 scope.go:117] "RemoveContainer" containerID="abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7" Jul 2 01:54:52.226224 env[1557]: 
time="2024-07-02T01:54:52.226199006Z" level=info msg="RemoveContainer for \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\"" Jul 2 01:54:52.234182 env[1557]: time="2024-07-02T01:54:52.234140787Z" level=info msg="RemoveContainer for \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\" returns successfully" Jul 2 01:54:52.234672 kubelet[2702]: I0702 01:54:52.234560 2702 scope.go:117] "RemoveContainer" containerID="2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d" Jul 2 01:54:52.235808 env[1557]: time="2024-07-02T01:54:52.235770545Z" level=info msg="RemoveContainer for \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\"" Jul 2 01:54:52.245846 env[1557]: time="2024-07-02T01:54:52.245807334Z" level=info msg="RemoveContainer for \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\" returns successfully" Jul 2 01:54:52.246163 kubelet[2702]: I0702 01:54:52.246047 2702 scope.go:117] "RemoveContainer" containerID="faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6" Jul 2 01:54:52.247373 env[1557]: time="2024-07-02T01:54:52.247346209Z" level=info msg="RemoveContainer for \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\"" Jul 2 01:54:52.255950 env[1557]: time="2024-07-02T01:54:52.255911124Z" level=info msg="RemoveContainer for \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\" returns successfully" Jul 2 01:54:52.256697 kubelet[2702]: I0702 01:54:52.256555 2702 scope.go:117] "RemoveContainer" containerID="f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b" Jul 2 01:54:52.257625 env[1557]: time="2024-07-02T01:54:52.256859106Z" level=error msg="ContainerStatus for \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\": not found" Jul 2 01:54:52.258205 
kubelet[2702]: E0702 01:54:52.258019 2702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\": not found" containerID="f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b" Jul 2 01:54:52.258205 kubelet[2702]: I0702 01:54:52.258067 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b"} err="failed to get container status \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9f32032b9aeb28bb4c62fa50d75c396f9b74ce10262536f1206d04cf5eb5c8b\": not found" Jul 2 01:54:52.258205 kubelet[2702]: I0702 01:54:52.258100 2702 scope.go:117] "RemoveContainer" containerID="5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a" Jul 2 01:54:52.258880 env[1557]: time="2024-07-02T01:54:52.258820911Z" level=error msg="ContainerStatus for \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\": not found" Jul 2 01:54:52.259425 kubelet[2702]: E0702 01:54:52.259384 2702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\": not found" containerID="5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a" Jul 2 01:54:52.259663 kubelet[2702]: I0702 01:54:52.259534 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a"} 
err="failed to get container status \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c00f32b9e71d6d6c874abebae3b84a559a4cd18fb9665bb487a4a0a00eace5a\": not found" Jul 2 01:54:52.259663 kubelet[2702]: I0702 01:54:52.259555 2702 scope.go:117] "RemoveContainer" containerID="abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7" Jul 2 01:54:52.259962 env[1557]: time="2024-07-02T01:54:52.259911656Z" level=error msg="ContainerStatus for \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\": not found" Jul 2 01:54:52.260206 kubelet[2702]: E0702 01:54:52.260173 2702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\": not found" containerID="abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7" Jul 2 01:54:52.260206 kubelet[2702]: I0702 01:54:52.260210 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7"} err="failed to get container status \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"abe42e7384b6cc0de4723bf29b9255379f3848cb200f8922e4e6799fe43649f7\": not found" Jul 2 01:54:52.260365 kubelet[2702]: I0702 01:54:52.260230 2702 scope.go:117] "RemoveContainer" containerID="2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d" Jul 2 01:54:52.260785 env[1557]: time="2024-07-02T01:54:52.260580631Z" level=error msg="ContainerStatus for 
\"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\": not found" Jul 2 01:54:52.261058 kubelet[2702]: E0702 01:54:52.261029 2702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\": not found" containerID="2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d" Jul 2 01:54:52.261128 kubelet[2702]: I0702 01:54:52.261064 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d"} err="failed to get container status \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d9b47791ed341240ad0d70b06deb451026a5ba18016eba149cc8be67000e23d\": not found" Jul 2 01:54:52.261128 kubelet[2702]: I0702 01:54:52.261077 2702 scope.go:117] "RemoveContainer" containerID="faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6" Jul 2 01:54:52.261315 env[1557]: time="2024-07-02T01:54:52.261247206Z" level=error msg="ContainerStatus for \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\": not found" Jul 2 01:54:52.261523 kubelet[2702]: E0702 01:54:52.261471 2702 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\": not found" 
containerID="faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6" Jul 2 01:54:52.261523 kubelet[2702]: I0702 01:54:52.261514 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6"} err="failed to get container status \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"faefd2f4bf4cbcfaa1ec97706cea9562649a2b3ac2347ffa4296c92f3afc24e6\": not found" Jul 2 01:54:52.312162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8-rootfs.mount: Deactivated successfully. Jul 2 01:54:52.312301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a-rootfs.mount: Deactivated successfully. Jul 2 01:54:52.312395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a-shm.mount: Deactivated successfully. Jul 2 01:54:52.312478 systemd[1]: var-lib-kubelet-pods-96870388\x2d9737\x2d4248\x2d94c0\x2d2038639d3961-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw6dwm.mount: Deactivated successfully. Jul 2 01:54:52.312561 systemd[1]: var-lib-kubelet-pods-e14df043\x2d3e6d\x2d4010\x2d837c\x2db5c23edbf10b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4pmgn.mount: Deactivated successfully. Jul 2 01:54:52.312662 systemd[1]: var-lib-kubelet-pods-e14df043\x2d3e6d\x2d4010\x2d837c\x2db5c23edbf10b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 01:54:52.312751 systemd[1]: var-lib-kubelet-pods-e14df043\x2d3e6d\x2d4010\x2d837c\x2db5c23edbf10b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 01:54:53.344997 sshd[4237]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:53.347805 systemd[1]: sshd@20-10.200.20.41:22-10.200.16.10:58816.service: Deactivated successfully. Jul 2 01:54:53.349181 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 01:54:53.349570 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit. Jul 2 01:54:53.351001 systemd-logind[1543]: Removed session 23. Jul 2 01:54:53.415113 systemd[1]: Started sshd@21-10.200.20.41:22-10.200.16.10:34200.service. Jul 2 01:54:53.768241 kubelet[2702]: I0702 01:54:53.768212 2702 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="96870388-9737-4248-94c0-2038639d3961" path="/var/lib/kubelet/pods/96870388-9737-4248-94c0-2038639d3961/volumes" Jul 2 01:54:53.768662 kubelet[2702]: I0702 01:54:53.768643 2702 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" path="/var/lib/kubelet/pods/e14df043-3e6d-4010-837c-b5c23edbf10b/volumes" Jul 2 01:54:53.845751 sshd[4405]: Accepted publickey for core from 10.200.16.10 port 34200 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:53.847496 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:53.851838 systemd[1]: Started session-24.scope. Jul 2 01:54:53.852827 systemd-logind[1543]: New session 24 of user core. 
Jul 2 01:54:55.167718 kubelet[2702]: I0702 01:54:55.167683 2702 topology_manager.go:215] "Topology Admit Handler" podUID="e8f68080-0be2-4266-9efa-7a1c508e26ee" podNamespace="kube-system" podName="cilium-n2vxz" Jul 2 01:54:55.168205 kubelet[2702]: E0702 01:54:55.168189 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" containerName="apply-sysctl-overwrites" Jul 2 01:54:55.168306 kubelet[2702]: E0702 01:54:55.168296 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" containerName="mount-bpf-fs" Jul 2 01:54:55.168376 kubelet[2702]: E0702 01:54:55.168368 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" containerName="clean-cilium-state" Jul 2 01:54:55.168453 kubelet[2702]: E0702 01:54:55.168433 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" containerName="cilium-agent" Jul 2 01:54:55.168510 kubelet[2702]: E0702 01:54:55.168501 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" containerName="mount-cgroup" Jul 2 01:54:55.168572 kubelet[2702]: E0702 01:54:55.168562 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96870388-9737-4248-94c0-2038639d3961" containerName="cilium-operator" Jul 2 01:54:55.168664 kubelet[2702]: I0702 01:54:55.168654 2702 memory_manager.go:346] "RemoveStaleState removing state" podUID="96870388-9737-4248-94c0-2038639d3961" containerName="cilium-operator" Jul 2 01:54:55.168747 kubelet[2702]: I0702 01:54:55.168737 2702 memory_manager.go:346] "RemoveStaleState removing state" podUID="e14df043-3e6d-4010-837c-b5c23edbf10b" containerName="cilium-agent" Jul 2 01:54:55.218727 sshd[4405]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:55.222206 systemd[1]: sshd@21-10.200.20.41:22-10.200.16.10:34200.service: Deactivated 
successfully. Jul 2 01:54:55.224036 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 01:54:55.224827 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit. Jul 2 01:54:55.225873 systemd-logind[1543]: Removed session 24. Jul 2 01:54:55.281799 kubelet[2702]: I0702 01:54:55.281762 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t527n\" (UniqueName: \"kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-kube-api-access-t527n\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282070 kubelet[2702]: I0702 01:54:55.282052 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-lib-modules\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282185 kubelet[2702]: I0702 01:54:55.282174 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cni-path\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282280 kubelet[2702]: I0702 01:54:55.282271 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-etc-cni-netd\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282382 kubelet[2702]: I0702 01:54:55.282372 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-ipsec-secrets\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282492 kubelet[2702]: I0702 01:54:55.282482 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-bpf-maps\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282588 kubelet[2702]: I0702 01:54:55.282577 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-hubble-tls\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282717 kubelet[2702]: I0702 01:54:55.282706 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-xtables-lock\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282813 kubelet[2702]: I0702 01:54:55.282803 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-hostproc\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.282917 kubelet[2702]: I0702 01:54:55.282906 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-net\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " 
pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.283036 kubelet[2702]: I0702 01:54:55.283024 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-clustermesh-secrets\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.283148 kubelet[2702]: I0702 01:54:55.283136 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-config-path\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.283250 kubelet[2702]: I0702 01:54:55.283240 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-kernel\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.283350 kubelet[2702]: I0702 01:54:55.283340 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-run\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.283439 kubelet[2702]: I0702 01:54:55.283430 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-cgroup\") pod \"cilium-n2vxz\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " pod="kube-system/cilium-n2vxz" Jul 2 01:54:55.293322 systemd[1]: Started 
sshd@22-10.200.20.41:22-10.200.16.10:34208.service. Jul 2 01:54:55.473056 env[1557]: time="2024-07-02T01:54:55.472341986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2vxz,Uid:e8f68080-0be2-4266-9efa-7a1c508e26ee,Namespace:kube-system,Attempt:0,}" Jul 2 01:54:55.513300 env[1557]: time="2024-07-02T01:54:55.513231102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:54:55.513590 env[1557]: time="2024-07-02T01:54:55.513564548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:54:55.513720 env[1557]: time="2024-07-02T01:54:55.513698471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:54:55.514012 env[1557]: time="2024-07-02T01:54:55.513983637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156 pid=4431 runtime=io.containerd.runc.v2 Jul 2 01:54:55.547872 env[1557]: time="2024-07-02T01:54:55.547820368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2vxz,Uid:e8f68080-0be2-4266-9efa-7a1c508e26ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\"" Jul 2 01:54:55.551785 env[1557]: time="2024-07-02T01:54:55.551736328Z" level=info msg="CreateContainer within sandbox \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 01:54:55.589148 env[1557]: time="2024-07-02T01:54:55.589076211Z" level=info msg="CreateContainer within sandbox \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443\"" Jul 2 01:54:55.589820 env[1557]: time="2024-07-02T01:54:55.589797026Z" level=info msg="StartContainer for \"021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443\"" Jul 2 01:54:55.638378 env[1557]: time="2024-07-02T01:54:55.638332618Z" level=info msg="StartContainer for \"021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443\" returns successfully" Jul 2 01:54:55.708524 env[1557]: time="2024-07-02T01:54:55.708466090Z" level=info msg="shim disconnected" id=021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443 Jul 2 01:54:55.708524 env[1557]: time="2024-07-02T01:54:55.708516892Z" level=warning msg="cleaning up after shim disconnected" id=021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443 namespace=k8s.io Jul 2 01:54:55.708524 env[1557]: time="2024-07-02T01:54:55.708525852Z" level=info msg="cleaning up dead shim" Jul 2 01:54:55.715401 env[1557]: time="2024-07-02T01:54:55.715355231Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4513 runtime=io.containerd.runc.v2\n" Jul 2 01:54:55.726363 sshd[4417]: Accepted publickey for core from 10.200.16.10 port 34208 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:55.727790 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:55.733187 systemd[1]: Started session-25.scope. Jul 2 01:54:55.733392 systemd-logind[1543]: New session 25 of user core. 
Jul 2 01:54:55.895395 kubelet[2702]: E0702 01:54:55.895359 2702 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 01:54:56.133473 sshd[4417]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:56.136410 systemd[1]: sshd@22-10.200.20.41:22-10.200.16.10:34208.service: Deactivated successfully. Jul 2 01:54:56.137919 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit. Jul 2 01:54:56.138158 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 01:54:56.142678 systemd-logind[1543]: Removed session 25. Jul 2 01:54:56.197274 env[1557]: time="2024-07-02T01:54:56.197185564Z" level=info msg="StopPodSandbox for \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\"" Jul 2 01:54:56.197515 env[1557]: time="2024-07-02T01:54:56.197491730Z" level=info msg="Container to stop \"021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 01:54:56.208325 systemd[1]: Started sshd@23-10.200.20.41:22-10.200.16.10:34214.service. 
Jul 2 01:54:56.279497 env[1557]: time="2024-07-02T01:54:56.279450341Z" level=info msg="shim disconnected" id=0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156 Jul 2 01:54:56.279811 env[1557]: time="2024-07-02T01:54:56.279789588Z" level=warning msg="cleaning up after shim disconnected" id=0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156 namespace=k8s.io Jul 2 01:54:56.279895 env[1557]: time="2024-07-02T01:54:56.279881829Z" level=info msg="cleaning up dead shim" Jul 2 01:54:56.288081 env[1557]: time="2024-07-02T01:54:56.288038950Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4558 runtime=io.containerd.runc.v2\n" Jul 2 01:54:56.288751 env[1557]: time="2024-07-02T01:54:56.288723923Z" level=info msg="TearDown network for sandbox \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" successfully" Jul 2 01:54:56.288865 env[1557]: time="2024-07-02T01:54:56.288847846Z" level=info msg="StopPodSandbox for \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" returns successfully" Jul 2 01:54:56.389573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156-shm.mount: Deactivated successfully. 
Jul 2 01:54:56.390837 kubelet[2702]: I0702 01:54:56.390772 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-xtables-lock\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.390837 kubelet[2702]: I0702 01:54:56.390814 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-net\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391131 kubelet[2702]: I0702 01:54:56.390842 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-ipsec-secrets\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391131 kubelet[2702]: I0702 01:54:56.390885 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-hostproc\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391131 kubelet[2702]: I0702 01:54:56.390910 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-clustermesh-secrets\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391131 kubelet[2702]: I0702 01:54:56.390930 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-config-path\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391131 kubelet[2702]: I0702 01:54:56.390946 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-run\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391131 kubelet[2702]: I0702 01:54:56.390974 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t527n\" (UniqueName: \"kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-kube-api-access-t527n\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391266 kubelet[2702]: I0702 01:54:56.390995 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-etc-cni-netd\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391266 kubelet[2702]: I0702 01:54:56.391010 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-bpf-maps\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391266 kubelet[2702]: I0702 01:54:56.391027 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-lib-modules\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391266 kubelet[2702]: I0702 01:54:56.391048 2702 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-kernel\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391266 kubelet[2702]: I0702 01:54:56.391064 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-cgroup\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391266 kubelet[2702]: I0702 01:54:56.391083 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-hubble-tls\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391396 kubelet[2702]: I0702 01:54:56.391100 2702 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cni-path\") pod \"e8f68080-0be2-4266-9efa-7a1c508e26ee\" (UID: \"e8f68080-0be2-4266-9efa-7a1c508e26ee\") " Jul 2 01:54:56.391396 kubelet[2702]: I0702 01:54:56.391155 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cni-path" (OuterVolumeSpecName: "cni-path") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.391396 kubelet[2702]: I0702 01:54:56.391182 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.391396 kubelet[2702]: I0702 01:54:56.391199 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.391544 kubelet[2702]: I0702 01:54:56.391524 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.391675 kubelet[2702]: I0702 01:54:56.391660 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-hostproc" (OuterVolumeSpecName: "hostproc") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.392031 kubelet[2702]: I0702 01:54:56.392010 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.392153 kubelet[2702]: I0702 01:54:56.392140 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.392249 kubelet[2702]: I0702 01:54:56.392237 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.392346 kubelet[2702]: I0702 01:54:56.392334 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.394700 kubelet[2702]: I0702 01:54:56.394650 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 01:54:56.394777 kubelet[2702]: I0702 01:54:56.394717 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:56.397314 systemd[1]: var-lib-kubelet-pods-e8f68080\x2d0be2\x2d4266\x2d9efa\x2d7a1c508e26ee-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 01:54:56.398827 kubelet[2702]: I0702 01:54:56.398786 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 01:54:56.400840 systemd[1]: var-lib-kubelet-pods-e8f68080\x2d0be2\x2d4266\x2d9efa\x2d7a1c508e26ee-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 01:54:56.400964 systemd[1]: var-lib-kubelet-pods-e8f68080\x2d0be2\x2d4266\x2d9efa\x2d7a1c508e26ee-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 01:54:56.404409 kubelet[2702]: I0702 01:54:56.404385 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 01:54:56.404649 kubelet[2702]: I0702 01:54:56.404616 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:56.406977 systemd[1]: var-lib-kubelet-pods-e8f68080\x2d0be2\x2d4266\x2d9efa\x2d7a1c508e26ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt527n.mount: Deactivated successfully. Jul 2 01:54:56.408122 kubelet[2702]: I0702 01:54:56.408088 2702 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-kube-api-access-t527n" (OuterVolumeSpecName: "kube-api-access-t527n") pod "e8f68080-0be2-4266-9efa-7a1c508e26ee" (UID: "e8f68080-0be2-4266-9efa-7a1c508e26ee"). InnerVolumeSpecName "kube-api-access-t527n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:56.491399 kubelet[2702]: I0702 01:54:56.491369 2702 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-bpf-maps\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.491646 kubelet[2702]: I0702 01:54:56.491631 2702 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-etc-cni-netd\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.491745 kubelet[2702]: I0702 01:54:56.491732 2702 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.491815 kubelet[2702]: I0702 01:54:56.491807 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-cgroup\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.491876 kubelet[2702]: I0702 01:54:56.491869 2702 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-lib-modules\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.491945 kubelet[2702]: I0702 01:54:56.491938 2702 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-hubble-tls\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492006 kubelet[2702]: I0702 01:54:56.491999 2702 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cni-path\") on node 
\"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492074 kubelet[2702]: I0702 01:54:56.492066 2702 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-xtables-lock\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492137 kubelet[2702]: I0702 01:54:56.492129 2702 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-host-proc-sys-net\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492204 kubelet[2702]: I0702 01:54:56.492195 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-ipsec-secrets\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492271 kubelet[2702]: I0702 01:54:56.492263 2702 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-hostproc\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492338 kubelet[2702]: I0702 01:54:56.492330 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-run\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492401 kubelet[2702]: I0702 01:54:56.492392 2702 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8f68080-0be2-4266-9efa-7a1c508e26ee-clustermesh-secrets\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492469 kubelet[2702]: I0702 01:54:56.492461 2702 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8f68080-0be2-4266-9efa-7a1c508e26ee-cilium-config-path\") on 
node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.492536 kubelet[2702]: I0702 01:54:56.492528 2702 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t527n\" (UniqueName: \"kubernetes.io/projected/e8f68080-0be2-4266-9efa-7a1c508e26ee-kube-api-access-t527n\") on node \"ci-3510.3.5-a-637f296955\" DevicePath \"\"" Jul 2 01:54:56.643043 sshd[4541]: Accepted publickey for core from 10.200.16.10 port 34214 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:56.645272 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:56.649533 systemd[1]: Started session-26.scope. Jul 2 01:54:56.649754 systemd-logind[1543]: New session 26 of user core. Jul 2 01:54:57.198513 kubelet[2702]: I0702 01:54:57.198480 2702 scope.go:117] "RemoveContainer" containerID="021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443" Jul 2 01:54:57.201722 env[1557]: time="2024-07-02T01:54:57.201527435Z" level=info msg="RemoveContainer for \"021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443\"" Jul 2 01:54:57.219120 env[1557]: time="2024-07-02T01:54:57.218998805Z" level=info msg="RemoveContainer for \"021df9b3173dfbb75c45c1864a404e3a54dc5629e9116fa28d0c3b1bb6fe4443\" returns successfully" Jul 2 01:54:57.238017 kubelet[2702]: I0702 01:54:57.237979 2702 topology_manager.go:215] "Topology Admit Handler" podUID="3ef1e884-3cab-4ce9-a2fd-2159e1fda189" podNamespace="kube-system" podName="cilium-wrhjk" Jul 2 01:54:57.238264 kubelet[2702]: E0702 01:54:57.238249 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e8f68080-0be2-4266-9efa-7a1c508e26ee" containerName="mount-cgroup" Jul 2 01:54:57.238353 kubelet[2702]: I0702 01:54:57.238342 2702 memory_manager.go:346] "RemoveStaleState removing state" podUID="e8f68080-0be2-4266-9efa-7a1c508e26ee" containerName="mount-cgroup" Jul 2 01:54:57.397617 kubelet[2702]: I0702 01:54:57.397574 2702 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-cilium-config-path\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.397968 kubelet[2702]: I0702 01:54:57.397631 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-cni-path\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.397968 kubelet[2702]: I0702 01:54:57.397652 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-clustermesh-secrets\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.397968 kubelet[2702]: I0702 01:54:57.397671 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-cilium-ipsec-secrets\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.397968 kubelet[2702]: I0702 01:54:57.397695 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-cilium-cgroup\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.397968 kubelet[2702]: I0702 01:54:57.397713 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-host-proc-sys-kernel\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.397968 kubelet[2702]: I0702 01:54:57.397757 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-etc-cni-netd\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398119 kubelet[2702]: I0702 01:54:57.397777 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-xtables-lock\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398119 kubelet[2702]: I0702 01:54:57.397795 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-bpf-maps\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398119 kubelet[2702]: I0702 01:54:57.397815 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-cilium-run\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398119 kubelet[2702]: I0702 01:54:57.397836 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvvvh\" (UniqueName: \"kubernetes.io/projected/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-kube-api-access-mvvvh\") pod \"cilium-wrhjk\" (UID: 
\"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398119 kubelet[2702]: I0702 01:54:57.397855 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-host-proc-sys-net\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398119 kubelet[2702]: I0702 01:54:57.397873 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-hubble-tls\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398251 kubelet[2702]: I0702 01:54:57.397891 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-hostproc\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.398251 kubelet[2702]: I0702 01:54:57.397908 2702 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ef1e884-3cab-4ce9-a2fd-2159e1fda189-lib-modules\") pod \"cilium-wrhjk\" (UID: \"3ef1e884-3cab-4ce9-a2fd-2159e1fda189\") " pod="kube-system/cilium-wrhjk" Jul 2 01:54:57.543857 env[1557]: time="2024-07-02T01:54:57.543508698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrhjk,Uid:3ef1e884-3cab-4ce9-a2fd-2159e1fda189,Namespace:kube-system,Attempt:0,}" Jul 2 01:54:57.579506 env[1557]: time="2024-07-02T01:54:57.579334215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:54:57.579506 env[1557]: time="2024-07-02T01:54:57.579371176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:54:57.579506 env[1557]: time="2024-07-02T01:54:57.579381136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:54:57.579852 env[1557]: time="2024-07-02T01:54:57.579790584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf pid=4595 runtime=io.containerd.runc.v2 Jul 2 01:54:57.611904 env[1557]: time="2024-07-02T01:54:57.611817709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrhjk,Uid:3ef1e884-3cab-4ce9-a2fd-2159e1fda189,Namespace:kube-system,Attempt:0,} returns sandbox id \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\"" Jul 2 01:54:57.615733 env[1557]: time="2024-07-02T01:54:57.615680022Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 01:54:57.652638 env[1557]: time="2024-07-02T01:54:57.652575239Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b1d595abd2d29575a878ebef493642cf0fc1e0d5e7ffcca711f4f64dbdab344\"" Jul 2 01:54:57.653999 env[1557]: time="2024-07-02T01:54:57.653966586Z" level=info msg="StartContainer for \"9b1d595abd2d29575a878ebef493642cf0fc1e0d5e7ffcca711f4f64dbdab344\"" Jul 2 01:54:57.714738 env[1557]: time="2024-07-02T01:54:57.714692573Z" level=info msg="StartContainer for \"9b1d595abd2d29575a878ebef493642cf0fc1e0d5e7ffcca711f4f64dbdab344\" returns 
successfully" Jul 2 01:54:57.768694 kubelet[2702]: I0702 01:54:57.768392 2702 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e8f68080-0be2-4266-9efa-7a1c508e26ee" path="/var/lib/kubelet/pods/e8f68080-0be2-4266-9efa-7a1c508e26ee/volumes" Jul 2 01:54:57.794705 env[1557]: time="2024-07-02T01:54:57.794564723Z" level=info msg="shim disconnected" id=9b1d595abd2d29575a878ebef493642cf0fc1e0d5e7ffcca711f4f64dbdab344 Jul 2 01:54:57.794705 env[1557]: time="2024-07-02T01:54:57.794631804Z" level=warning msg="cleaning up after shim disconnected" id=9b1d595abd2d29575a878ebef493642cf0fc1e0d5e7ffcca711f4f64dbdab344 namespace=k8s.io Jul 2 01:54:57.794705 env[1557]: time="2024-07-02T01:54:57.794642364Z" level=info msg="cleaning up dead shim" Jul 2 01:54:57.801701 env[1557]: time="2024-07-02T01:54:57.801654097Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4676 runtime=io.containerd.runc.v2\n" Jul 2 01:54:58.205209 env[1557]: time="2024-07-02T01:54:58.205067088Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 01:54:58.249577 env[1557]: time="2024-07-02T01:54:58.249517575Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"282d76e7d1257a6bf450eeaf4b5cb6462ef9350c3f05b49df697ccec3eccc0fd\"" Jul 2 01:54:58.250290 env[1557]: time="2024-07-02T01:54:58.250212587Z" level=info msg="StartContainer for \"282d76e7d1257a6bf450eeaf4b5cb6462ef9350c3f05b49df697ccec3eccc0fd\"" Jul 2 01:54:58.303572 env[1557]: time="2024-07-02T01:54:58.303530075Z" level=info msg="StartContainer for \"282d76e7d1257a6bf450eeaf4b5cb6462ef9350c3f05b49df697ccec3eccc0fd\" returns successfully" Jul 2 01:54:58.336068 
env[1557]: time="2024-07-02T01:54:58.336015505Z" level=info msg="shim disconnected" id=282d76e7d1257a6bf450eeaf4b5cb6462ef9350c3f05b49df697ccec3eccc0fd Jul 2 01:54:58.336068 env[1557]: time="2024-07-02T01:54:58.336060305Z" level=warning msg="cleaning up after shim disconnected" id=282d76e7d1257a6bf450eeaf4b5cb6462ef9350c3f05b49df697ccec3eccc0fd namespace=k8s.io Jul 2 01:54:58.336068 env[1557]: time="2024-07-02T01:54:58.336069025Z" level=info msg="cleaning up dead shim" Jul 2 01:54:58.343422 env[1557]: time="2024-07-02T01:54:58.343376158Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4739 runtime=io.containerd.runc.v2\n" Jul 2 01:54:59.207831 env[1557]: time="2024-07-02T01:54:59.207793855Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 01:54:59.237021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499453275.mount: Deactivated successfully. Jul 2 01:54:59.244142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073551505.mount: Deactivated successfully. 
Jul 2 01:54:59.255841 env[1557]: time="2024-07-02T01:54:59.255785811Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9e1c7435d9130ad967ad4492887f0a04c473a7cf7c6c160ca768af952f76a60\"" Jul 2 01:54:59.257823 env[1557]: time="2024-07-02T01:54:59.256425062Z" level=info msg="StartContainer for \"e9e1c7435d9130ad967ad4492887f0a04c473a7cf7c6c160ca768af952f76a60\"" Jul 2 01:54:59.312513 env[1557]: time="2024-07-02T01:54:59.312468238Z" level=info msg="StartContainer for \"e9e1c7435d9130ad967ad4492887f0a04c473a7cf7c6c160ca768af952f76a60\" returns successfully" Jul 2 01:54:59.344823 env[1557]: time="2024-07-02T01:54:59.344777320Z" level=info msg="shim disconnected" id=e9e1c7435d9130ad967ad4492887f0a04c473a7cf7c6c160ca768af952f76a60 Jul 2 01:54:59.345125 env[1557]: time="2024-07-02T01:54:59.345079565Z" level=warning msg="cleaning up after shim disconnected" id=e9e1c7435d9130ad967ad4492887f0a04c473a7cf7c6c160ca768af952f76a60 namespace=k8s.io Jul 2 01:54:59.345209 env[1557]: time="2024-07-02T01:54:59.345196447Z" level=info msg="cleaning up dead shim" Jul 2 01:54:59.351922 env[1557]: time="2024-07-02T01:54:59.351880484Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4798 runtime=io.containerd.runc.v2\n" Jul 2 01:54:59.863294 kubelet[2702]: I0702 01:54:59.863271 2702 setters.go:552] "Node became not ready" node="ci-3510.3.5-a-637f296955" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T01:54:59Z","lastTransitionTime":"2024-07-02T01:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 01:55:00.214272 env[1557]: time="2024-07-02T01:55:00.213442732Z" level=info msg="CreateContainer 
within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 01:55:00.242158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004295881.mount: Deactivated successfully. Jul 2 01:55:00.250008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643088403.mount: Deactivated successfully. Jul 2 01:55:00.265052 env[1557]: time="2024-07-02T01:55:00.265000512Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bea10543937f92c34cbfe3842650a4e56816610776d9c28454acbe75a56135b4\"" Jul 2 01:55:00.267478 env[1557]: time="2024-07-02T01:55:00.267449273Z" level=info msg="StartContainer for \"bea10543937f92c34cbfe3842650a4e56816610776d9c28454acbe75a56135b4\"" Jul 2 01:55:00.315335 env[1557]: time="2024-07-02T01:55:00.315291352Z" level=info msg="StartContainer for \"bea10543937f92c34cbfe3842650a4e56816610776d9c28454acbe75a56135b4\" returns successfully" Jul 2 01:55:00.341062 env[1557]: time="2024-07-02T01:55:00.341006181Z" level=info msg="shim disconnected" id=bea10543937f92c34cbfe3842650a4e56816610776d9c28454acbe75a56135b4 Jul 2 01:55:00.341254 env[1557]: time="2024-07-02T01:55:00.341074662Z" level=warning msg="cleaning up after shim disconnected" id=bea10543937f92c34cbfe3842650a4e56816610776d9c28454acbe75a56135b4 namespace=k8s.io Jul 2 01:55:00.341254 env[1557]: time="2024-07-02T01:55:00.341086462Z" level=info msg="cleaning up dead shim" Jul 2 01:55:00.348867 env[1557]: time="2024-07-02T01:55:00.348819391Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:55:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4855 runtime=io.containerd.runc.v2\n" Jul 2 01:55:00.896519 kubelet[2702]: E0702 01:55:00.896485 2702 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 01:55:01.219279 env[1557]: time="2024-07-02T01:55:01.218981037Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 01:55:01.254591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4094131211.mount: Deactivated successfully. Jul 2 01:55:01.263972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839783344.mount: Deactivated successfully. Jul 2 01:55:01.273458 env[1557]: time="2024-07-02T01:55:01.273410706Z" level=info msg="CreateContainer within sandbox \"e759e108f04429477093af8efb95068fbbd43aefa8a6a275e94afb74af101daf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f1510976e005c450b8c052b92ca7174833b1abcaa9f6b985880eff64852dbf0\"" Jul 2 01:55:01.275929 env[1557]: time="2024-07-02T01:55:01.275888626Z" level=info msg="StartContainer for \"3f1510976e005c450b8c052b92ca7174833b1abcaa9f6b985880eff64852dbf0\"" Jul 2 01:55:01.327648 env[1557]: time="2024-07-02T01:55:01.327591212Z" level=info msg="StartContainer for \"3f1510976e005c450b8c052b92ca7174833b1abcaa9f6b985880eff64852dbf0\" returns successfully" Jul 2 01:55:01.733987 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 01:55:02.234810 kubelet[2702]: I0702 01:55:02.234773 2702 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wrhjk" podStartSLOduration=5.234730938 podCreationTimestamp="2024-07-02 01:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:55:02.23418541 +0000 UTC m=+216.604279628" watchObservedRunningTime="2024-07-02 01:55:02.234730938 +0000 UTC m=+216.604825156" Jul 2 01:55:03.106993 systemd[1]: 
run-containerd-runc-k8s.io-3f1510976e005c450b8c052b92ca7174833b1abcaa9f6b985880eff64852dbf0-runc.LrrGau.mount: Deactivated successfully. Jul 2 01:55:04.337261 systemd-networkd[1730]: lxc_health: Link UP Jul 2 01:55:04.381688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 01:55:04.381889 systemd-networkd[1730]: lxc_health: Gained carrier Jul 2 01:55:05.261729 systemd[1]: run-containerd-runc-k8s.io-3f1510976e005c450b8c052b92ca7174833b1abcaa9f6b985880eff64852dbf0-runc.TQZU6U.mount: Deactivated successfully. Jul 2 01:55:06.451738 systemd-networkd[1730]: lxc_health: Gained IPv6LL Jul 2 01:55:07.473883 systemd[1]: run-containerd-runc-k8s.io-3f1510976e005c450b8c052b92ca7174833b1abcaa9f6b985880eff64852dbf0-runc.OKCOnG.mount: Deactivated successfully. Jul 2 01:55:09.630806 systemd[1]: run-containerd-runc-k8s.io-3f1510976e005c450b8c052b92ca7174833b1abcaa9f6b985880eff64852dbf0-runc.I5S3tT.mount: Deactivated successfully. Jul 2 01:55:09.765101 sshd[4541]: pam_unix(sshd:session): session closed for user core Jul 2 01:55:09.768570 systemd[1]: sshd@23-10.200.20.41:22-10.200.16.10:34214.service: Deactivated successfully. Jul 2 01:55:09.769383 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 01:55:09.770448 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit. Jul 2 01:55:09.771482 systemd-logind[1543]: Removed session 26. Jul 2 01:55:23.547159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5937db8b62353b2b043a1564a92eb57d253874a5a274ec360feadc21965e68fc-rootfs.mount: Deactivated successfully. 
Jul 2 01:55:23.652823 env[1557]: time="2024-07-02T01:55:23.652776639Z" level=info msg="shim disconnected" id=5937db8b62353b2b043a1564a92eb57d253874a5a274ec360feadc21965e68fc Jul 2 01:55:23.653311 env[1557]: time="2024-07-02T01:55:23.653288320Z" level=warning msg="cleaning up after shim disconnected" id=5937db8b62353b2b043a1564a92eb57d253874a5a274ec360feadc21965e68fc namespace=k8s.io Jul 2 01:55:23.653392 env[1557]: time="2024-07-02T01:55:23.653378161Z" level=info msg="cleaning up dead shim" Jul 2 01:55:23.661128 env[1557]: time="2024-07-02T01:55:23.661085342Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:55:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5532 runtime=io.containerd.runc.v2\n" Jul 2 01:55:24.261550 kubelet[2702]: I0702 01:55:24.261521 2702 scope.go:117] "RemoveContainer" containerID="5937db8b62353b2b043a1564a92eb57d253874a5a274ec360feadc21965e68fc" Jul 2 01:55:24.264741 env[1557]: time="2024-07-02T01:55:24.264705544Z" level=info msg="CreateContainer within sandbox \"93d17d19900d077d7736a481f4fc1a4d11c40ccd05878fdadc6c21579924235c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 01:55:24.289537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392183218.mount: Deactivated successfully. Jul 2 01:55:24.296908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007756174.mount: Deactivated successfully. 
Jul 2 01:55:24.307951 env[1557]: time="2024-07-02T01:55:24.307910644Z" level=info msg="CreateContainer within sandbox \"93d17d19900d077d7736a481f4fc1a4d11c40ccd05878fdadc6c21579924235c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a2df2ed3481605c4f69ccd09b047f7462f741f363e9d247c700cc4f82a19034b\"" Jul 2 01:55:24.308588 env[1557]: time="2024-07-02T01:55:24.308556965Z" level=info msg="StartContainer for \"a2df2ed3481605c4f69ccd09b047f7462f741f363e9d247c700cc4f82a19034b\"" Jul 2 01:55:24.368561 env[1557]: time="2024-07-02T01:55:24.368509783Z" level=info msg="StartContainer for \"a2df2ed3481605c4f69ccd09b047f7462f741f363e9d247c700cc4f82a19034b\" returns successfully" Jul 2 01:55:25.397189 kubelet[2702]: E0702 01:55:25.397072 2702 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.5-a-637f296955.17de42957fa74ecb", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.5-a-637f296955", UID:"48528d3091784103fe477d2561eed7f1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-637f296955"}, FirstTimestamp:time.Date(2024, time.July, 2, 1, 55, 17, 561339595, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 1, 55, 17, 561339595, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-637f296955"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.41:41640->10.200.20.33:2379: read: connection timed out' (will not retry!) Jul 2 01:55:25.754892 env[1557]: time="2024-07-02T01:55:25.754782653Z" level=info msg="StopPodSandbox for \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\"" Jul 2 01:55:25.755201 env[1557]: time="2024-07-02T01:55:25.754871853Z" level=info msg="TearDown network for sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" successfully" Jul 2 01:55:25.755201 env[1557]: time="2024-07-02T01:55:25.755179134Z" level=info msg="StopPodSandbox for \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" returns successfully" Jul 2 01:55:25.755582 env[1557]: time="2024-07-02T01:55:25.755557415Z" level=info msg="RemovePodSandbox for \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\"" Jul 2 01:55:25.755655 env[1557]: time="2024-07-02T01:55:25.755587495Z" level=info msg="Forcibly stopping sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\"" Jul 2 01:55:25.755706 env[1557]: time="2024-07-02T01:55:25.755685855Z" level=info msg="TearDown network for sandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" successfully" Jul 2 01:55:25.767207 env[1557]: time="2024-07-02T01:55:25.767173756Z" level=info msg="RemovePodSandbox \"2166ebd3c6f9c8b069e438c3ec1b2c9c13a5844b8417bf81c5d0f0ef81b6d52a\" returns successfully" Jul 2 01:55:25.767639 env[1557]: time="2024-07-02T01:55:25.767616797Z" level=info msg="StopPodSandbox for \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\"" Jul 2 01:55:25.767837 env[1557]: time="2024-07-02T01:55:25.767785837Z" level=info msg="TearDown network for sandbox 
\"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" successfully" Jul 2 01:55:25.767920 env[1557]: time="2024-07-02T01:55:25.767902837Z" level=info msg="StopPodSandbox for \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" returns successfully" Jul 2 01:55:25.768258 env[1557]: time="2024-07-02T01:55:25.768233558Z" level=info msg="RemovePodSandbox for \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\"" Jul 2 01:55:25.768320 env[1557]: time="2024-07-02T01:55:25.768263118Z" level=info msg="Forcibly stopping sandbox \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\"" Jul 2 01:55:25.768352 env[1557]: time="2024-07-02T01:55:25.768323078Z" level=info msg="TearDown network for sandbox \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" successfully" Jul 2 01:55:25.775573 env[1557]: time="2024-07-02T01:55:25.775503571Z" level=info msg="RemovePodSandbox \"3bf2f8dc79441aa3010be02dae56ccb25426cf4c6306273b01e6f6b41fd4c4b8\" returns successfully" Jul 2 01:55:25.775901 env[1557]: time="2024-07-02T01:55:25.775881172Z" level=info msg="StopPodSandbox for \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\"" Jul 2 01:55:25.776063 env[1557]: time="2024-07-02T01:55:25.776025692Z" level=info msg="TearDown network for sandbox \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" successfully" Jul 2 01:55:25.776131 env[1557]: time="2024-07-02T01:55:25.776115852Z" level=info msg="StopPodSandbox for \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" returns successfully" Jul 2 01:55:25.776451 env[1557]: time="2024-07-02T01:55:25.776421893Z" level=info msg="RemovePodSandbox for \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\"" Jul 2 01:55:25.776502 env[1557]: time="2024-07-02T01:55:25.776450893Z" level=info msg="Forcibly stopping sandbox \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\"" Jul 2 01:55:25.776531 
env[1557]: time="2024-07-02T01:55:25.776508933Z" level=info msg="TearDown network for sandbox \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" successfully" Jul 2 01:55:25.782431 env[1557]: time="2024-07-02T01:55:25.782388583Z" level=info msg="RemovePodSandbox \"0671656d2c64dafe730b7ab044d61f27eb51516a8465f533663fbcd9483e8156\" returns successfully" Jul 2 01:55:26.169409 kubelet[2702]: E0702 01:55:26.169219 2702 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.41:41840->10.200.20.33:2379: read: connection timed out" Jul 2 01:55:26.186532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17e3cd4ccda0b55d9c4fef98005dbf595971e1d0f8b031f8062d720d588e99d6-rootfs.mount: Deactivated successfully. Jul 2 01:55:26.204692 env[1557]: time="2024-07-02T01:55:26.204644093Z" level=info msg="shim disconnected" id=17e3cd4ccda0b55d9c4fef98005dbf595971e1d0f8b031f8062d720d588e99d6 Jul 2 01:55:26.204920 env[1557]: time="2024-07-02T01:55:26.204901453Z" level=warning msg="cleaning up after shim disconnected" id=17e3cd4ccda0b55d9c4fef98005dbf595971e1d0f8b031f8062d720d588e99d6 namespace=k8s.io Jul 2 01:55:26.204985 env[1557]: time="2024-07-02T01:55:26.204972453Z" level=info msg="cleaning up dead shim" Jul 2 01:55:26.211868 env[1557]: time="2024-07-02T01:55:26.211829463Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:55:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5595 runtime=io.containerd.runc.v2\n" Jul 2 01:55:26.269414 kubelet[2702]: I0702 01:55:26.268989 2702 scope.go:117] "RemoveContainer" containerID="17e3cd4ccda0b55d9c4fef98005dbf595971e1d0f8b031f8062d720d588e99d6" Jul 2 01:55:26.270954 env[1557]: time="2024-07-02T01:55:26.270910582Z" level=info msg="CreateContainer within sandbox \"5fa96a9764a2bc7b2432634670f83d3e78712988a8989654e3326a46bedca7e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 01:55:26.296136 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702479608.mount: Deactivated successfully. Jul 2 01:55:26.302750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093507084.mount: Deactivated successfully. Jul 2 01:55:26.316507 env[1557]: time="2024-07-02T01:55:26.316461003Z" level=info msg="CreateContainer within sandbox \"5fa96a9764a2bc7b2432634670f83d3e78712988a8989654e3326a46bedca7e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e25786f99d13d4dd79dd18d67220dae5e7a5da61ce0a6b99f2e4e799b0b9f7bd\"" Jul 2 01:55:26.317098 env[1557]: time="2024-07-02T01:55:26.317077603Z" level=info msg="StartContainer for \"e25786f99d13d4dd79dd18d67220dae5e7a5da61ce0a6b99f2e4e799b0b9f7bd\"" Jul 2 01:55:26.376637 env[1557]: time="2024-07-02T01:55:26.375635082Z" level=info msg="StartContainer for \"e25786f99d13d4dd79dd18d67220dae5e7a5da61ce0a6b99f2e4e799b0b9f7bd\" returns successfully" Jul 2 01:55:34.740358 kubelet[2702]: I0702 01:55:34.740326 2702 status_manager.go:853] "Failed to get status for pod" podUID="492ddebccae5acac1deb05d137a923c3" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-637f296955" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.41:41734->10.200.20.33:2379: read: connection timed out" Jul 2 01:55:36.170059 kubelet[2702]: E0702 01:55:36.169841 2702 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-637f296955?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 01:55:44.523827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.541614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.556439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.570927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.586060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.602159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.609923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.624953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.632541 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.647570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.670493 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.693271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.700949 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.708973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.724929 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.745163 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.745340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.745449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.745546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.745669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.745768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.745929 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.746127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.746298 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.746405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.764358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.764560 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.764691 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.780237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.788471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.788679 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.804284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.804477 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.820244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.820536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.835869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.836032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.851859 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.852085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.868248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.868474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.892304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.892532 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.892671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.908505 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.908777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.931784 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.932010 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.932122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.947063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.947271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.961959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.962176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.976861 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.977126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:44.999990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.000252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.000367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.015859 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.016051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.031237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.031435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
01:55:45.046632 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.055369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.055565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.070947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.071124 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.086314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.086482 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.101591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.113013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.113144 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.124345 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.132190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.140113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.147947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.186687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Jul 2 01:55:45.201826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229193 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229500 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229607 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229793 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.229892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.239862 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.255392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.277711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.277864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.277970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Jul 2 01:55:45.278066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.286450 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.301755 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.309397 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.325478 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.339995 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.340110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.340216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.340311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.348740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.348953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.371951 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.372200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.372303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.387224 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.411759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.411909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.412032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.412141 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.434491 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.458153 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.458277 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.458368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.458454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.475129 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.475329 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.475441 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.491294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.509074 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.509190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.509295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.515634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.532817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.533087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.549820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.572760 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.572925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.573026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.584613 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.584868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.602077 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.602373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.623705 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.623996 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:45.633372 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[the kernel hv_storvsc message "f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001" repeats continuously from Jul 2 01:55:45.649209 through Jul 2 01:55:46.175455]
Jul 2 01:55:46.175734 kubelet[2702]: E0702 01:55:46.175451 2702 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-637f296955?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
[the same kernel hv_storvsc message repeats continuously from Jul 2 01:55:46.207555 through Jul 2 01:55:48.072668]
Jul 2 01:55:48.088588 kernel:
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.151550 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.168347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185054 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185192 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185310 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185727 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.185920 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.186012 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.186107 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.186197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
01:55:48.186289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.201425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.201697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.216971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.249143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.249332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.249426 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.249512 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.249610 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.265497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.265838 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.281513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.281836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.297272 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.297551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Jul 2 01:55:48.313819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.314398 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.330260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.338462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.338573 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.353960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.378072 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.378241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.378339 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.378424 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.394384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.394763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.410162 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.434059 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.465499 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Jul 2 01:55:48.465750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.465873 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.465973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.466059 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.466143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.466229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.481037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.481286 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.496932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.497223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.521080 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.521334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.521454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.537347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.537652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.553019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.569374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.600293 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.600404 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.600500 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.600623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.600722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.609416 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.633147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.633261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.633366 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.633464 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.648783 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.649066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.664672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.664886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.680924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.681166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.697081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.697367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.721767 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.722014 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.722126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.737848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.809683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.825915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.834165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.850294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.850423 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.850527 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.850651 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.850753 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.850852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.850944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.851068 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.851181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.851281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.851387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:48.851475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001